mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-04-22 23:42:06 +00:00
fix library

commit eccb8a0099 (parent 69c3d79a1c)
@@ -1,6 +1,6 @@
 # Stream responses in Javascript and Python
 
-Requesting and generating text with LLMs can be a time-consuming and iterative process. A great way to improve the user experience is streaming tokens to the user as they are generated. Below are two examples of how to stream tokens using Python and JavaScript. For Python, we are going to use the **[client from Text Generation Inference](https://github.com/huggingface/text-generation-inference/tree/main/clients/python)**, and for JavaScript, the **[HuggingFace.js library](https://huggingface.co/docs/huggingface.js/main/en/index)**
+Requesting and generating text with LLMs can be a time-consuming and iterative process. A great way to improve the user experience is streaming tokens to the user as they are generated. Below are two examples of how to stream tokens using Python and JavaScript. For Python, we are going to use the **[huggingface_hub library](https://huggingface.co/docs/huggingface_hub/index)**, and for JavaScript, the **[HuggingFace.js library](https://huggingface.co/docs/huggingface.js/main/en/index)**
 
 ## Streaming requests with Python
 
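The doc change above swaps the Python client for the huggingface_hub library, whose streaming mode yields tokens one at a time. The consumption loop that implies can be sketched as follows; since the real call, `InferenceClient.text_generation(..., stream=True)`, needs network access and a deployed model, a hypothetical `fake_stream` generator stands in for it here:

```python
# Sketch of the token-streaming pattern described in the doc. In real use you
# would iterate over huggingface_hub's
# InferenceClient().text_generation(prompt, stream=True); here a stub
# generator stands in for that network call so the loop is self-contained.

def fake_stream(tokens):
    """Stand-in for a streaming text_generation call: yields tokens one by one."""
    for token in tokens:
        yield token

pieces = []
for token in fake_stream(["Hello", ",", " world", "!"]):
    # Show each token to the user as soon as it arrives,
    # instead of waiting for the full completion.
    print(token, end="", flush=True)
    pieces.append(token)
print()
```

The point of the pattern is that rendering happens inside the loop, so the user sees partial output immediately rather than after the whole generation finishes.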