mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-09-10 20:04:52 +00:00
Added note to install huggingface-hub
This commit is contained in:
parent 9a17f30042
commit 15175839b4
@@ -16,11 +16,15 @@ curl 127.0.0.1:8080/generate \
## Inference Client
[`huggingface-hub`](https://huggingface.co/docs/huggingface_hub/main/en/index) is a Python library to interact with the Hugging Face Hub, including its endpoints. It provides a nice high-level class, [`~huggingface_hub.InferenceClient`], which makes it easy to make calls to a TGI endpoint. `InferenceClient` also takes care of parameter validation and provides a simple to-use interface.
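The workflow this paragraph describes can be sketched as follows. The endpoint URL and prompt are placeholder values, and the `try`/`except` guard is only there so the snippet degrades gracefully when no TGI server is running:

```python
from huggingface_hub import InferenceClient

# Placeholder URL: point this at whatever endpoint your TGI server exposes.
client = InferenceClient(model="http://127.0.0.1:8080")

try:
    # Generation parameters such as max_new_tokens are passed as keyword
    # arguments; as noted above, the client takes care of validating them.
    output = client.text_generation(
        prompt="What is Deep Learning?",
        max_new_tokens=50,
    )
    print(output)
except Exception as err:
    # No server listening at the placeholder URL, connection refused, etc.
    print(f"Request failed: {err}")
```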
You can install the `huggingface-hub` library with pip:

```shell
pip install huggingface-hub
```

Once you start the TGI server, instantiate `InferenceClient()` with the URL to the endpoint serving the model. You can then call `text_generation()` to hit the endpoint through Python.
```python
from huggingface_hub import InferenceClient

# Point the client at the local TGI endpoint started above.
client = InferenceClient(model="http://127.0.0.1:8080")

# Hit the endpoint through the high-level helper; the prompt is illustrative.
client.text_generation(prompt="Write a short poem about TGI.")
```