text-generation-inference/server/text_generation_server/models/custom_modeling
Daniël de Kok 571ac9b507
Use kernels from the kernel hub (#2988)
* Use Hub kernels for Marlin and cutlass quantization kernels

* Use hub kernels for MoE/GPTQ-Marlin MoE

* Use attention kernels from the Hub

* Cache the kernels in the Docker image

* Update moe kernels

* Support loading local kernels for development

* Support latest moe kernels

* Update to moe 0.1.1

* CI: download locked kernels for server tests

* Fixup some imports

* CI: activate venv

* Fix unused imports

* Nix: add attention/moe/quantization kernels

* Update hf-kernels to 0.1.5

* Update kernels

* Update tgi-nix flake for hf-kernels

* Fix EOF

* Take `load_kernel` out of a frequently-called function (see the sketch below this log)

* Hoist another case of kernel loading out of a somewhat hot function

* marlin-kernels -> quantization

* attention -> paged-attention

* EOF fix

* Update hf-kernels, fixup Docker

* ipex fix

* Remove outdated TODO
2025-02-10 19:19:25 +01:00
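
The recurring fix in the bullets above, in particular "Take `load_kernel` out of a frequently-called function" and the follow-up hoisting commit, is to resolve a Hub kernel once at module import time instead of on every forward pass. Below is a minimal sketch of that pattern. It assumes the `kernels` package (the successor to the `hf-kernels` dependency this PR pins) and its `get_kernel` API; the `kernels-community/activation` repo id and the `gelu_fast` op come from that library's documentation, not from this PR, and stand in for the quantization/attention/MoE kernels the PR actually wires up.

    import torch
    from kernels import get_kernel

    # Hoisted to module scope: the Hub lookup (and any download) happens
    # once per process instead of once per call into the hot path.
    # Assumption: `kernels.get_kernel` resolves a kernel repo from the
    # Hugging Face Hub and returns a module-like object exposing its ops.
    activation = get_kernel("kernels-community/activation")

    def gelu_fast(x: torch.Tensor) -> torch.Tensor:
        # The loaded kernel writes into a preallocated output tensor.
        out = torch.empty_like(x)
        activation.gelu_fast(out, x)
        return out

    if __name__ == "__main__":
        x = torch.randn(4, 16, dtype=torch.float16, device="cuda")
        print(gelu_fast(x).shape)

The "CI: download locked kernels for server tests" and "Cache the kernels in the Docker image" bullets suggest the same lookups are pinned via a lock file and prefetched at build time, so the module-scope get_kernel call at runtime becomes a local cache hit rather than a network fetch.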
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py Fixing auto bloom test. (#2699) 2024-10-28 06:14:11 +01:00
clip.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flash_cohere_modeling.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
flash_dbrx_modeling.py Use kernels from the kernel hub (#2988) 2025-02-10 19:19:25 +01:00
flash_deepseek_v2_modeling.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
flash_deepseek_v3_modeling.py Add deepseekv3 (#2968) 2025-01-30 16:40:25 +01:00
flash_gemma2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gemma_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gpt2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gptj_modeling.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
flash_llama_modeling.py fix the crash of meta-llama/Llama-3.2-1B (#2918) 2025-01-17 15:50:58 +01:00
flash_mistral_modeling.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
flash_mixtral_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_neox_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_pali_gemma_modeling.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
flash_phi_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_phi_moe_modeling.py feat: support phi3.5 moe (#2479) 2024-09-30 11:15:09 +02:00
flash_qwen2_modeling.py Improve qwen vl impl (#2943) 2025-02-04 12:44:18 -05:00
flash_rw_modeling.py Using both value from config as they might not be correct. (#2817) 2024-12-10 19:37:09 +01:00
flash_santacoder_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_starcoder2_modeling.py feat: improve star coder to support multi lora layers (#2883) 2025-01-16 16:23:55 -05:00
idefics2.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
idefics3.py Improve vlm support (add idefics3 support) (#2437) 2025-01-09 10:35:32 -05:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_modeling.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
idefics_perceiver.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_vision.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
llava_next.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
mamba_modeling.py Fix: Change embeddings to embedding (#2738) 2024-11-15 13:16:15 +01:00
mllama.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
mpt_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
neox_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
opt_modeling.py Fixup opt to reduce the amount of odd if statements. (#2833) 2024-12-12 18:20:13 +01:00
phi_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
qwen2_vl.py Improve qwen vl impl (#2943) 2025-02-04 12:44:18 -05:00
siglip.py Fix: don't apply post layernorm in SiglipVisionTransformer (#2459) 2024-08-26 17:04:46 -04:00
t5_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
vlm.py Improve vlm support (add idefics3 support) (#2437) 2025-01-09 10:35:32 -05:00