Small improvements for docs

osanseviero 2024-08-29 11:24:00 +02:00
parent 8f99f165ce
commit e1e8b6d9c0
No known key found for this signature in database
GPG Key ID: 9652002F5D72EDD6
3 changed files with 6 additions and 24 deletions

View File

@@ -16,7 +16,7 @@
 - local: installation
   title: Installation from source
 - local: supported_models
-  title: Supported Models and Hardware
+  title: Supported Models
 - local: architecture
   title: Internal Architecture
 - local: usage_statistics

View File

@@ -1,9 +1,7 @@
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
-
 - [Deepseek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)
 - [Idefics 2](https://huggingface.co/HuggingFaceM4/idefics2-8b) (Multimodal)
@@ -36,17 +34,4 @@ Text Generation Inference enables serving optimized models on specific hardware
 - [Idefics](https://huggingface.co/HuggingFaceM4/idefics-9b) (Multimodal)
 
-If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
+If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models. Read more about [Non-core Model Serving](../basic_tutorials/non_core_models).
 
-```python
-# for causal LMs/text-generation models
-AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")
-# or, for text-to-text generation models
-AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
-```
-
-If you wish to serve a supported model that already exists on a local folder, just point to the local folder.
-
-```bash
-text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
-```
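As an aside, the snippet removed above chooses between two transformers Auto classes based on the model's pipeline type. That decision can be sketched as a small standalone helper; the function name is hypothetical (not part of Text Generation Inference or transformers), and the tag-to-class mapping is an assumption based on the comments in the removed snippet:

```python
# Hypothetical helper, not part of TGI: pick the transformers Auto class
# suggested by the docs for a given Hub pipeline tag.
def auto_class_for(pipeline_tag: str) -> str:
    if pipeline_tag == "text-generation":
        # causal LMs / text-generation models
        return "AutoModelForCausalLM"
    if pipeline_tag == "text2text-generation":
        # text-to-text generation models
        return "AutoModelForSeq2SeqLM"
    raise ValueError(f"no fallback loader known for pipeline tag {pipeline_tag!r}")

print(auto_class_for("text-generation"))  # AutoModelForCausalLM
```

The returned class name would then be used as in the removed snippet, e.g. `AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")`.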

View File

@@ -5,13 +5,10 @@ import json
 import os
 
 TEMPLATE = """
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
-
 SUPPORTED_MODELS
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models: