fix: rename header

drbh 2024-04-29 20:20:30 +00:00
parent 0d03620500
commit e07d0ebc06


@@ -1,4 +1,4 @@
-# Vision Language Models (VLM)
+# Vision Language Model Inference in TGI
 Visual Language Model (VLM) are models that consume both image and text inputs to generate text.

@@ -17,7 +17,7 @@ Below are couple of common use cases for vision language models:
 ### Hugging Face Hub Python Library
-To infer with vision language models through Python, you can use the [`huggingface_hub`](https://pypi.org/project/huggingface-hub/) library. The `InferenceClient` class provides a simple way to interact with the [Inference API](https://huggingface.co/docs/api-inference/index)
+To infer with vision language models through Python, you can use the [`huggingface_hub`](https://pypi.org/project/huggingface-hub/) library. The `InferenceClient` class provides a simple way to interact with the [Inference API](https://huggingface.co/docs/api-inference/index). Images can be passed as URLs or base64-encoded strings. The `InferenceClient` will automatically detect the image format.
 ```python
 from huggingface_hub import InferenceClient

@@ -31,8 +31,6 @@ for token in client.text_generation(prompt, max_new_tokens=16, stream=True):
 # This is a picture of an anthropomorphic rabbit in a space suit.
 ```
-Images can be passed as URLs or base64-encoded strings. The `InferenceClient` will automatically detect the image format.
 ```python
 from huggingface_hub import InferenceClient
 import base64
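
The diff above truncates the second example right after `import base64`. Purely as an illustration (not part of this commit), here is a minimal sketch of how the base64 path described by the moved sentence might look, assuming a locally running TGI endpoint and a placeholder image URL:

```python
# Hedged sketch, not taken from the commit: pass a base64-encoded image to a
# VLM via InferenceClient. The endpoint URL and image URL are assumptions.
from huggingface_hub import InferenceClient
import base64
import requests

client = InferenceClient("http://127.0.0.1:3000")  # assumed local TGI server

# Download any image and base64-encode it (placeholder URL).
image_url = "https://example.com/rabbit.png"
image_b64 = base64.b64encode(requests.get(image_url).content).decode("utf-8")

# Embed the image in the prompt as a markdown-style data URI, as the docs
# describe for inline images.
prompt = f"![](data:image/png;base64,{image_b64})What is this a picture of?\n\n"

for token in client.text_generation(prompt, max_new_tokens=16, stream=True):
    print(token, end="")
```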