diff --git a/docs/source/basic_tutorials/consuming_tgi.md b/docs/source/basic_tutorials/consuming_tgi.md
index f8b80b6e..ce66c07b 100644
--- a/docs/source/basic_tutorials/consuming_tgi.md
+++ b/docs/source/basic_tutorials/consuming_tgi.md
@@ -1,5 +1,7 @@
 # Consuming Text Generation Inference
 
+There are many ways to consume the Text Generation Inference server in your applications. Two of them are built by Hugging Face: ChatUI is the open-source front-end for Text Generation Inference, and [~huggingface_hub.InferenceClient] is a robust and detailed client for running inference against hosted TGI servers.
+
 ## ChatUI
 
 ChatUI is the open-source interface built for large language model serving. It offers many customization options, such as web search with SERP API and more. ChatUI can automatically consume the TGI server and even provides an option to switch between different TGI endpoints. You can try it out at [Hugging Chat](https://huggingface.co/chat/), or use the [ChatUI Docker Space](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) to deploy your own Hugging Chat to Spaces.
@@ -23,4 +25,4 @@
 client = InferenceClient(model=URL_TO_ENDPOINT_SERVING_TGI)
 client.text_generation(prompt="Write a code for snake game", model=URL_TO_ENDPOINT_SERVING_TGI)
 ```
-You can check out the details of the function [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).
\ No newline at end of file
+You can check out the details of the function [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).
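
For reference, here is the `InferenceClient` snippet from the second hunk expanded into a minimal, self-contained sketch. The import and the `max_new_tokens` argument are assumptions for illustration (the diff shows only part of the file's code block), and the endpoint URL is a placeholder for any running TGI server:

```python
# Minimal sketch, assuming a TGI server is already running at the placeholder URL below.
from huggingface_hub import InferenceClient

client = InferenceClient(model="http://127.0.0.1:8080")  # placeholder TGI endpoint

# text_generation() sends the prompt to the TGI server and returns the generated text as a string.
generated = client.text_generation(
    prompt="Write code for a snake game",
    max_new_tokens=512,  # illustrative cap on output length, not part of the diff
)
print(generated)
```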