Small improvements for docs
parent 8f99f165ce
commit e1e8b6d9c0
```diff
@@ -16,7 +16,7 @@
 - local: installation
   title: Installation from source
 - local: supported_models
-  title: Supported Models and Hardware
+  title: Supported Models
 - local: architecture
   title: Internal Architecture
 - local: usage_statistics
```
```diff
@@ -1,9 +1,7 @@
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
-
 - [Deepseek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)
 - [Idefics 2](https://huggingface.co/HuggingFaceM4/idefics2-8b) (Multimodal)
```
````diff
@@ -36,17 +34,4 @@ Text Generation Inference enables serving optimized models on specific hardware
 - [Idefics](https://huggingface.co/HuggingFaceM4/idefics-9b) (Multimodal)
 
 
-If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
+If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models. Read more about [Non-core Model Serving](../basic_tutorials/non_core_models).
-
-```python
-# for causal LMs/text-generation models
-AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")
-# or, for text-to-text generation models
-AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
-```
-
-If you wish to serve a supported model that already exists on a local folder, just point to the local folder.
-
-```bash
-text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
-```
````
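For context, the snippet removed above is the standard transformers fallback check the old docs described. A minimal runnable version of it might look like the sketch below; the model id `gpt2` is an arbitrary stand-in, not from the original docs, and `device_map="auto"` additionally requires the `accelerate` package:

```python
# Minimal sketch of the fallback described above: load an arbitrary
# text-generation model with plain transformers to see whether it works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in model id for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run a quick generation to confirm the model actually serves text.
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```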
```diff
@@ -5,13 +5,10 @@ import json
 import os
 
 TEMPLATE = """
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
-
 SUPPORTED_MODELS
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
```
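This last hunk edits the `TEMPLATE` string in the docs-generation script, which fills the `SUPPORTED_MODELS` placeholder with the generated model list so the heading change only has to be made once. A minimal sketch of that substitution pattern, with an illustrative model list and output path rather than the script's actual code:

```python
# Illustrative sketch: fill the SUPPORTED_MODELS placeholder in TEMPLATE
# and write the result out as the supported-models doc page.
TEMPLATE = """# Supported Models

Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.

SUPPORTED_MODELS
"""

models = ["Deepseek V2", "Idefics 2"]  # illustrative entries only
listing = "\n".join(f"- {name}" for name in models)

with open("supported_models.md", "w") as f:
    f.write(TEMPLATE.replace("SUPPORTED_MODELS", listing))
```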