Added streaming

Merve Noyan 2023-08-11 16:21:11 +03:00 committed by GitHub
parent 5df4c7c0d7
commit db2dd5229b

@@ -33,6 +33,8 @@ client = InferenceClient(model=URL_TO_ENDPOINT_SERVING_TGI)
client.text_generation(prompt="Write a code for snake game", model=URL_TO_ENDPOINT_SERVING_TGI)
```
To stream tokens with `InferenceClient`, simply pass `stream=True`. Another parameter you can use with the TGI backend is `details`. You can get more details on the generation (tokens, probabilities, etc.) by setting `details` to `True`. By default, `details` is set to `False`, and `text_generation` returns only the text output. If you set both `details` and `stream` to `True`, `text_generation` will return a stream of `TextGenerationStreamResponse` objects, each consisting of the generated token, the generated text, and details.
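A minimal sketch of both modes, assuming a running TGI endpoint behind the placeholder `URL_TO_ENDPOINT_SERVING_TGI` from the snippet above:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(model=URL_TO_ENDPOINT_SERVING_TGI)

# stream=True yields the generated tokens one by one as plain strings
for token in client.text_generation(prompt="Write a code for snake game", stream=True):
    print(token, end="")

# stream=True together with details=True yields TextGenerationStreamResponse
# objects, which carry the token text along with metadata such as log probabilities
for response in client.text_generation(
    prompt="Write a code for snake game", stream=True, details=True
):
    print(response.token.text, end="")
```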
You can check out the details of the function [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).