text-generation-inference/server/text_generation_server/models
Daniël de Kok 571ac9b507
Use kernels from the kernel hub (#2988)
* Use Hub kernels for Marlin and cutlass quantization kernels

* Use hub kernels for MoE/GPTQ-Marlin MoE

* Use attention kernels from the Hub

* Cache the kernels in the Docker image

* Update moe kernels

* Support loading local kernels for development

* Support latest moe kernels

* Update to moe 0.1.1

* CI: download locked kernels for server tests

* Fixup some imports

* CI: activate venv

* Fix unused imports

* Nix: add attention/moe/quantization kernels

* Update hf-kernels to 0.1.5

* Update kernels

* Update tgi-nix flake for hf-kernels

* Fix EOF

* Take `load_kernel` out of a frequently-called function

* Hoist another case of kernel loading out of a somewhat hot function

* marlin-kernels -> quantization

* attention -> paged-attention

* EOF fix

* Update hf-kernels, fixup Docker

* ipex fix

* Remove outdated TODO
2025-02-10 19:19:25 +01:00
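The commit above replaces vendored quantization, MoE, and attention kernels with ones fetched from the Hugging Face kernel hub via hf-kernels, adds a local-kernel path for development, and hoists kernel loading out of hot code paths. As a rough illustration only, the sketch below shows what such a loader could look like; the wrapper name `load_kernel`, the `TGI_LOCAL_KERNELS` environment variable, and the module/repo names are assumptions for illustration, not TGI's actual code.

```python
# Minimal sketch, assuming hf-kernels exposes `get_kernel(repo_id)` returning a
# module-like object. Names below (load_kernel, TGI_LOCAL_KERNELS,
# "kernels-community/quantization") are hypothetical/illustrative.
import importlib
import os

from hf_kernels import get_kernel


def load_kernel(*, module: str, repo_id: str):
    """Import `module` from a local checkout when TGI_LOCAL_KERNELS is set
    (hypothetical development switch); otherwise fetch `repo_id` from the Hub."""
    if os.environ.get("TGI_LOCAL_KERNELS"):
        return importlib.import_module(module)
    return get_kernel(repo_id)


# Resolved once at import time so the Hub lookup does not sit inside a
# frequently-called function (cf. the hoisting bullets in the commit message).
quantization = load_kernel(
    module="quantization", repo_id="kernels-community/quantization"
)
```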
Name | Last commit | Last updated
custom_modeling | Use kernels from the kernel hub (#2988) | 2025-02-10 19:19:25 +01:00
__init__.py | Improve qwen vl impl (#2943) | 2025-02-04 12:44:18 -05:00
bloom.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-07-05 10:29:56 +02:00
causal_lm.py | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00
flash_causal_lm.py | Improve qwen vl impl (#2943) | 2025-02-04 12:44:18 -05:00
galactica.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
globals.py | Fixing the oom maybe with 2.5.1 change. (#2958) | 2025-01-28 10:30:28 +01:00
idefics_causal_lm.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
mamba.py | Choosing input/total tokens automatically based on available VRAM? (#2673) | 2024-10-28 04:59:49 +01:00
metadata_kernels.py | feat: add payload limit (#2726) | 2024-11-21 18:20:15 +00:00
mllama_causal_lm.py | feat: add triton kernels to decrease latency of large batches (#2687) | 2024-10-25 21:10:00 +00:00
model.py | Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815) | 2025-01-17 12:04:57 +01:00
pali_gemma.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
seq2seq_lm.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
transformers_flash_causal_lm.py | Transformers backend TP fix (#2945) | 2025-01-23 18:09:57 +01:00
types.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
vlm_causal_lm.py | Revert "feat: improve qwen2-vl startup " (#2924) | 2025-01-17 12:09:05 -05:00