Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-12 04:44:52 +00:00
Updating the doc (we keep the list actually).
parent 3d5f10701d
commit 406725e05f
@@ -34,4 +34,18 @@ Text Generation Inference enables serving optimized models. The following sections
- [Idefics](https://huggingface.co/HuggingFaceM4/idefics-9b) (Multimodal)
If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyway to see how well it performs, but performance isn't guaranteed for non-optimized models. Read more about [Non-core Model Serving](../basic_tutorials/non_core_models).
If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyway to see how well it performs, but performance isn't guaranteed for non-optimized models:
```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

# for causal LMs/text-generation models
AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")
# or, for text-to-text generation models
AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
```
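
As a quick sanity check before serving, here is a minimal sketch of loading such a model and running a single generation with Transformers. The model id `my-org/my-model` is a placeholder for illustration, not a real checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id used purely for illustration.
model_id = "my-org/my-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run one short generation to verify the model loads and produces text.
inputs = tokenizer("What is Deep Learning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```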
If you wish to serve a supported model that already exists in a local folder, just point to the local folder.
```bash
text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
```
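
Once the launcher is up, one way to verify the server responds is to call its `/generate` endpoint, sketched here with Python's `requests` and assuming the launcher's default port 8080:

```python
import requests

# Assumes text-generation-launcher is running locally on its default port (8080).
response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
)
print(response.json())
```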
@@ -9,6 +9,8 @@ TEMPLATE = """
Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
SUPPORTED_MODELS
If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyway to see how well it performs, but performance isn't guaranteed for non-optimized models:
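
For context, the `SUPPORTED_MODELS` placeholder in this template is substituted with the rendered model list when the doc is regenerated. A minimal sketch of that substitution follows; the helper name `render_supported_models_doc` and the example list are assumptions for illustration, not the actual `update_doc.py` code:

```python
TEMPLATE = """
Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.

SUPPORTED_MODELS
"""

def render_supported_models_doc(models: list[str]) -> str:
    # Turn each model entry into a markdown bullet and substitute it for the placeholder.
    bullets = "\n".join(f"- {m}" for m in models)
    return TEMPLATE.replace("SUPPORTED_MODELS", bullets)

print(render_supported_models_doc(
    ["[Idefics](https://huggingface.co/HuggingFaceM4/idefics-9b) (Multimodal)"]
))
```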