tuelwer committed on
Commit 3deb624 · verified · 1 Parent(s): 9ad497c

Fix print statement in example snippet

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -149,7 +149,7 @@ completion = client.chat.completions.create(
     messages=[{"role": "User", "content": "Hallo"}],
     extra_body={"chat_template":"DE"}
 )
-print(f"Assistant: {completion]")
+print(f"Assistant: {completion}")
 ```
 The default language of the Chat-Template can also be set when starting the vLLM Server. For this create a new file with the name `lang` and the content `DE` and start the vLLM Server as follows:
 ``` shell
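
The one-character change matters because the pre-fix line is not valid Python at all: the f-string replacement field opened with `{` is closed with `]`, so the snippet fails to parse before anything runs. A minimal check of both versions (the `completion` value here is a stand-in for the actual chat completion object returned by the client):

```python
# The corrected line from the commit: braces in the f-string now match.
completion = "Hallo!"  # stand-in for the API response object
print(f"Assistant: {completion}")

# The pre-fix line does not even compile: the replacement field opened
# with '{' is closed with ']', which raises a SyntaxError at parse time.
try:
    compile('print(f"Assistant: {completion]")', "<old-snippet>", "exec")
    print("old snippet compiled")
except SyntaxError:
    print("old snippet raises SyntaxError")
```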