text-generation-inference/server/text_generation_server/models/custom_modeling
Daniël de Kok 77ac0f364b Add support for Marlin-quantized models
This change adds support for Marlin-quantized models. Marlin is an
FP16xINT4 matmul kernel, which provides good speedups when decoding
batches of 16-32 tokens. It supports models quantized with symmetric
quantization, a group size of -1 or 128, and 4-bit weights.

Tested with:

- Llama 2
- Llama 3
- Phi 3
2024-09-24 03:38:05 +00:00
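The quantization scheme the commit describes (symmetric, 4-bit, per-group scales with group size 128 or -1 for per-row) can be sketched in NumPy. This is an illustrative sketch only, not TGI's or Marlin's actual packing code; the function names are made up for this example.

```python
import numpy as np

def quantize_symmetric_int4(weights, group_size=128):
    # Symmetric 4-bit quantization: each group of `group_size` values
    # shares a single scale, and values map to integers in [-8, 7].
    # A group_size of -1 means one scale per row (channel-wise).
    gs = weights.shape[-1] if group_size == -1 else group_size
    w = weights.reshape(-1, gs)
    # Scale so the largest magnitude in each group maps to +/-7.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q.reshape(weights.shape), scales

def dequantize_symmetric_int4(q, scales, group_size=128):
    # Reverse the mapping: multiply each group back by its scale.
    gs = q.shape[-1] if group_size == -1 else group_size
    w = q.reshape(-1, gs).astype(np.float32) * scales
    return w.reshape(q.shape)
```

Because the scheme is symmetric, no zero-points are stored; the per-element rounding error is bounded by half a group's scale, which is what makes the FP16xINT4 matmul cheap to fuse in a kernel like Marlin.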
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
clip.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
flash_cohere_modeling.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
flash_dbrx_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_gemma_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_gpt2_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_llama_modeling.py Fixing Phi3. 2024-09-24 03:26:17 +00:00
flash_mistral_modeling.py Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
flash_mixtral_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_neox_modeling.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
flash_pali_gemma_modeling.py Pali gemma modeling (#1895) 2024-07-17 05:36:58 +00:00
flash_phi_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_qwen2_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_rw_modeling.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
flash_santacoder_modeling.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_starcoder2_modeling.py Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
idefics2.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
idefics_config.py chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
idefics_image_processing.py chore: formatting 2024-04-18 16:26:00 +03:00
idefics_modeling.py reenable xpu for tgi (#1939) 2024-07-17 05:36:58 +00:00
idefics_perceiver.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
idefics_processing.py chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
idefics_vision.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
llava_next.py Aligin the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
mamba_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
mpt_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
neox_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
opt_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
phi_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
siglip.py Removing some unused code. (#1915) 2024-07-17 05:36:58 +00:00
t5_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
vlm.py Pali gemma modeling (#1895) 2024-07-17 05:36:58 +00:00