From 921448cfdfcaef99ced66fe10cc3ed04606376bf Mon Sep 17 00:00:00 2001
From: Vaibhav Srivastav
Date: Wed, 14 Aug 2024 11:11:47 +0200
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: Omar Sanseviero
---
 docs/source/basic_tutorials/consuming_tgi.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/source/basic_tutorials/consuming_tgi.md b/docs/source/basic_tutorials/consuming_tgi.md
index 6e562226..1d776a1a 100644
--- a/docs/source/basic_tutorials/consuming_tgi.md
+++ b/docs/source/basic_tutorials/consuming_tgi.md
@@ -4,7 +4,7 @@ There are many ways to consume Text Generation Inference (TGI) server in your ap
 
 For more information on the API, consult the OpenAPI documentation of `text-generation-inference` available [here](https://huggingface.github.io/text-generation-inference).
 
-You can make the requests using any tool of your preference, such as curl, Python or TypeScript. For an end-to-end experience, we've open-sourced ChatUI, a chat interface for open-source models.
+You can make the requests using any tool of your preference, such as curl, Python, or TypeScript. For an end-to-end experience, we've open-sourced ChatUI, a chat interface for open-access models.
 
 ## curl
 
@@ -70,7 +70,7 @@ for message in chat_completion:
 
 ### Inference Client
 
-[`huggingface-hub`](https://huggingface.co/docs/huggingface_hub/main/en/index) is a Python library to interact with the Hugging Face Hub, including its endpoints. It provides a high-level class, [`huggingface_hub.InferenceClient`](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient), which makes it easy to make calls to TGI's Messages API. `InferenceClient` also takes care of parameter validation and provides a simple to-use interface.
+[`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/main/en/index) is a Python library to interact with the Hugging Face Hub, including its endpoints. It provides a high-level class, [`huggingface_hub.InferenceClient`](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient), which makes it easy to make calls to TGI's Messages API. `InferenceClient` also takes care of parameter validation and provides a simple-to-use interface.
 
 Install `huggingface_hub` package via pip.
 
@@ -90,7 +90,6 @@ You can now use `InferenceClient` the exact same way you would use `OpenAI` clie
     api_key=...,
 )
 
-
 output = client.chat.completions.create(
     model="tgi",
     messages=[