From 15175839b403a52c03ff42270bde5838a90a2051 Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Wed, 9 Aug 2023 17:09:13 +0300
Subject: [PATCH] Added note to install huggingface-hub

---
 docs/source/basic_tutorials/consuming_tgi.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/docs/source/basic_tutorials/consuming_tgi.md b/docs/source/basic_tutorials/consuming_tgi.md
index 10c69cf7..92fcb290 100644
--- a/docs/source/basic_tutorials/consuming_tgi.md
+++ b/docs/source/basic_tutorials/consuming_tgi.md
@@ -16,11 +16,15 @@ curl 127.0.0.1:8080/generate \
 
 ## Inference Client
 
-TODO: Add some installation note
-
 [`huggingface-hub`](https://huggingface.co/docs/huggingface_hub/main/en/index) is a Python library to interact with the Hugging Face Hub, including its endpoints. It provides a nice high-level class, [`~huggingface_hub.InferenceClient`], which makes it easy to make calls to a TGI endpoint. `InferenceClient` also takes care of parameter validation and provides a simple to-use interface.
 
-Once you start the TGI server, instantiate `InferenceClient()` with the URL to the endpoint serving the model. You can then call `text_generation()` to hit the endpoint through Python.
+You can simply install the `huggingface-hub` library with pip:
+
+```bash
+pip install huggingface-hub
+```
+
+Once you start the TGI server, instantiate `InferenceClient()` with the URL to the endpoint serving the model. You can then call `text_generation()` to hit the endpoint through Python.
 
 ```python