Supported Models
Text Generation Inference enables serving optimized models. The following list covers the models (LLMs and VLMs) that are currently supported.
- Deepseek V2
- Idefics 2 (Multimodal)
- Idefics 3 (Multimodal)
- Llava Next (1.6) (Multimodal)
- Llama
- Phi 3
- Granite
- Gemma
- PaliGemma
- Gemma2
- Cohere
- Dbrx
- Mamba
- Mistral
- Mixtral
- Gpt Bigcode
- Phi
- PhiMoe
- Baichuan
- Falcon
- StarCoder 2
- Qwen 2
- Qwen 2 VL
- Opt
- T5
- Galactica
- SantaCoder
- Bloom
- Mpt
- Gpt2
- Gpt Neox
- Gptj
- Idefics (Multimodal)
- Mllama (Multimodal)
If the model you would like to serve is not in the list above, you can still try to initialize and serve it, depending on its pipeline type, to see how well it performs; however, performance isn't guaranteed for non-optimized models:

```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

# for causal LMs/text-generation models
AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")
# or, for text-to-text generation models
AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
```
If you wish to serve a supported model that already exists in a local folder, just point the launcher to that folder:

```shell
text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
```
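Once the launcher is running, the server can be queried over HTTP. The sketch below builds a request body for TGI's `/generate` endpoint; the host and port (`localhost:3000` here) are an assumption and should be adjusted to match your `--port` setting, and the actual HTTP call is left commented out since it requires a running server:

```python
import json

# Request body for TGI's /generate endpoint: an "inputs" string
# plus optional generation "parameters".
payload = {
    "inputs": "What is deep learning?",
    "parameters": {"max_new_tokens": 64, "temperature": 0.7},
}
body = json.dumps(payload).encode("utf-8")

# With a server running (host/port assumed; adjust to your setup):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:3000/generate",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["generated_text"])
print(body.decode("utf-8"))
```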