Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-09-10 11:54:52 +00:00)
Update consuming_tgi.md
parent f7c49f612b
commit 982d6709fe
@ -1,5 +1,7 @@
# Consuming Text Generation Inference
There are many ways you can consume the Text Generation Inference server in your applications. Two of them are built by Hugging Face: ChatUI is the open-source front-end for Text Generation Inference, and [~huggingface_hub.InferenceClient] is a robust and detailed API for running inference against hosted TGI servers.
## ChatUI
ChatUI is an open-source interface built for serving large language models. It offers many customization options, such as web search with the SERP API. ChatUI can automatically consume the TGI server and even provides an option to switch between different TGI endpoints. You can try it out at [Hugging Chat](https://huggingface.co/chat/), or use the [ChatUI Docker Space](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) to deploy your own Hugging Chat to Spaces.
@ -23,4 +25,4 @@ client = InferenceClient(model=URL_TO_ENDPOINT_SERVING_TGI)
client.text_generation(prompt="Write a code for snake game", model=URL_TO_ENDPOINT_SERVING_TGI)
```
You can check out the details of the `text_generation` function [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).