mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-04-24 00:12:08 +00:00
Fixing the CI.
This commit is contained in:
parent 85df9fc2db
commit 11d25a4bd3
````diff
@@ -2094,4 +2094,4 @@
      "description": "Hugging Face Text Generation Inference API"
    }
  ]
}
}
````
````diff
@@ -88,7 +88,7 @@ There is also an async version of the client, `AsyncInferenceClient`, based on `

 You can directly use the OpenAI [Python](https://github.com/openai/openai-python) or [JS](https://github.com/openai/openai-node) clients to interact with TGI.

-Install the OpenAI Python package via pip.
+Install the OpenAI Python package via pip.

 ```bash
 pip install openai
````
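The docs changed in this hunk describe talking to TGI through the OpenAI clients. As a stdlib-only sketch of what such a call sends, the helper below builds the URL and JSON body for an OpenAI-style chat-completions request; the base URL, model name, and prompt are assumptions for illustration, and with the `openai` package installed you would call `client.chat.completions.create(...)` instead of assembling the request by hand.

```python
import json

def build_chat_request(base_url, prompt, max_tokens=1024, stream=True):
    """Return (url, JSON body) for an OpenAI-style chat-completions call."""
    # TGI exposes an OpenAI-compatible route under /v1.
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        # Placeholder model name: a TGI server serves whichever model
        # it was launched with, so the field is not used for routing.
        "model": "tgi",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": stream,
    })
    return url, body

url, body = build_chat_request("http://localhost:8080", "What is deep learning?")
print(url)  # http://localhost:8080/v1/chat/completions
```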
````diff
@@ -145,7 +145,7 @@ def inference(message, history):
         stream=True,
         max_tokens=1024,
     )

     for chunk in output:
         partial_message += chunk.choices[0].delta.content
         yield partial_message
````
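The loop in the hunk above is the standard streaming-accumulation pattern: each chunk carries a delta, and the partial message grows by concatenation so a UI like Gradio can re-render it on every yield. A minimal self-contained sketch follows; the `Chunk`/`Choice`/`Delta` dataclasses are stand-ins for the objects the OpenAI client actually yields, and only the access path (`chunk.choices[0].delta.content`) mirrors the real client.

```python
from dataclasses import dataclass

@dataclass
class Delta:
    content: str

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: list

def accumulate(stream):
    """Yield the growing partial message after each chunk, as the
    Gradio handler does."""
    partial_message = ""
    for chunk in stream:
        partial_message += chunk.choices[0].delta.content
        yield partial_message

# Simulated stream of three deltas.
demo = [Chunk([Choice(Delta(t))]) for t in ("Hel", "lo", "!")]
print(list(accumulate(demo)))  # ['Hel', 'Hello', 'Hello!']
```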
````diff
@@ -196,4 +196,4 @@ To serve both ChatUI and TGI in same environment, simply add your own endpoints
 }
 ```
````