text-generation-inference/server/text_generation_server/layers/attention
Latest commit 571ac9b507 by Daniël de Kok
Use kernels from the kernel hub (#2988)
* Use Hub kernels for Marlin and cutlass quantization kernels

* Use hub kernels for MoE/GPTQ-Marlin MoE

* Use attention kernels from the Hub (loading sketch after this commit message)

* Cache the kernels in the Docker image

* Update moe kernels

* Support loading local kernels for development

* Support latest moe kernels

* Update to moe 0.1.1

* CI: download locked kernels for server tests

* Fixup some imports

* CI: activate venv

* Fix unused imports

* Nix: add attention/moe/quantization kernels

* Update hf-kernels to 0.1.5

* Update kernels

* Update tgi-nix flake for hf-kernels

* Fix EOF

* Take `load_kernel` out of a frequently-called function

* Hoist another case of kernel loading out of a somewhat hot function

* marlin-kernels -> quantization

* attention -> paged-attention

* EOF fix

* Update hf-kernels, fixup Docker

* ipex fix

* Remove outdated TODO
2025-02-10 19:19:25 +01:00
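
The thread running through these bullets is fetching precompiled kernels from the Hugging Face Hub at runtime instead of building them into the image. A minimal sketch of that flow, assuming the hf-kernels 0.1.x `get_kernel` API; the Hub repo id and kernel entry point below are illustrative, and TGI's own `load_kernel` wrapper is not shown:

```python
# Assumes hf-kernels 0.1.x; the Hub repo id below is illustrative.
from hf_kernels import get_kernel

# Hoisted to module scope: the kernel is fetched from the Hub (or from the
# cache baked into the Docker image) and imported once at import time. This
# is the "take `load_kernel` out of a frequently-called function" change:
# no Hub/cache lookup happens on the hot attention path.
paged_attention = get_kernel("kernels-community/paged-attention")


def attention(query, key_cache, value_cache, *args, **kwargs):
    # The hot path only dispatches into the already-loaded module; the real
    # entry point takes more arguments (block tables, scales, ...).
    return paged_attention.paged_attention_v1(
        query, key_cache, value_cache, *args, **kwargs
    )
```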
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Add support for FP8 KV cache scales (#2628), sketched below | 2024-10-24 16:36:18 +02:00 |
| `common.py` | feat: prefill chunking (#2600), sketched below | 2024-10-16 12:49:33 +02:00 |
| `cuda.py` | Use kernels from the kernel hub (#2988) | 2025-02-10 19:19:25 +01:00 |
| `flash_attn_triton.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `flashinfer.py` | flashinfer: switch to plan API (#2904), sketched below | 2025-01-17 18:18:02 +01:00 |
| `ipex.py` | Add flash decoding kernels, prefill chunking, and prefix caching for Intel CPU/XPU (#2815) | 2025-01-17 12:04:57 +01:00 |
| `kv_cache.py` | Use kernels from the kernel hub (#2988) | 2025-02-10 19:19:25 +01:00 |
| `rocm.py` | Add fp8 KV cache for ROCm (#2856) | 2025-01-17 18:43:29 +05:30 |
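
The `__init__.py` and `rocm.py` rows reference FP8 KV cache scales (#2628, #2856). The gist, as a hedged sketch with hypothetical names: cache writes divide by a calibrated per-tensor scale before casting to fp8, and the scale is handed to the attention kernel so it can be multiplied back in on read.

```python
import torch


def store_key_fp8(key: torch.Tensor, k_scale: float,
                  key_cache: torch.Tensor, slots: torch.Tensor) -> None:
    # Hypothetical helper: divide by a calibrated per-tensor scale so values
    # fit fp8's narrow dynamic range, then cast into the cache. Attention
    # kernels receive k_scale and multiply it back in when reading the cache.
    for i, slot in enumerate(slots.tolist()):
        key_cache[slot] = (key[i] / k_scale).to(torch.float8_e4m3fn)
    # (The real cache write is a fused CUDA/ROCm scatter, not a Python loop.)


# Example: 4-slot cache of 2 heads x 8 dims; write one new token into slot 1.
key_cache = torch.zeros(4, 2, 8, dtype=torch.float8_e4m3fn)
key = torch.randn(1, 2, 8)
store_key_fp8(key, k_scale=1.5, key_cache=key_cache, slots=torch.tensor([1]))
```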
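The `common.py` and `flash_attn_triton.py` rows trace back to prefill chunking (#2600). A minimal sketch with a hypothetical model interface: the prompt is prefilled in fixed-size slices, so peak activation memory is bounded by the chunk size rather than the full prompt length.

```python
import torch


def chunked_prefill(model, input_ids: torch.Tensor, chunk_size: int = 2048):
    """Prefill a long prompt in fixed-size chunks (hypothetical model API).

    Each `model.forward` call is assumed to append the chunk's keys/values
    to the KV cache (`past`) and return logits for that chunk.
    """
    past, logits = None, None
    for start in range(0, input_ids.shape[0], chunk_size):
        chunk = input_ids[start : start + chunk_size]
        logits, past = model.forward(chunk, past_key_values=past)
    # The last chunk's logits carry the next-token distribution for decoding.
    return logits, past
```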
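The `flashinfer.py` row refers to flashinfer's plan/run split (#2904), which replaced the older begin_forward/forward calls: scheduling metadata for a batch shape is computed once in `plan()`, and `run()` then executes attention against it. A sketch under the assumption of flashinfer's 0.2-series wrapper API, with a single 16-token sequence on one KV page; exact argument names and layouts may differ by version:

```python
import torch
import flashinfer

device = "cuda"
num_qo_heads, num_kv_heads, head_dim, page_size = 8, 8, 64, 16

# One 16-token sequence whose KV lives on a single 16-token page.
qo_indptr = torch.tensor([0, 16], dtype=torch.int32, device=device)
kv_indptr = torch.tensor([0, 1], dtype=torch.int32, device=device)
kv_indices = torch.tensor([0], dtype=torch.int32, device=device)
kv_last_page_len = torch.tensor([16], dtype=torch.int32, device=device)

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device=device)
wrapper = flashinfer.BatchPrefillWithPagedKVCacheWrapper(workspace, kv_layout="NHD")

# plan() precomputes scheduling metadata for this batch shape once,
# so run() is a comparatively cheap dispatch.
wrapper.plan(
    qo_indptr, kv_indptr, kv_indices, kv_last_page_len,
    num_qo_heads, num_kv_heads, head_dim, page_size,
    causal=True,
)

q = torch.randn(16, num_qo_heads, head_dim, dtype=torch.float16, device=device)
# Paged KV cache in "NHD" layout: (pages, K/V, page_size, heads, head_dim).
kv_cache = torch.randn(1, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device=device)
out = wrapper.run(q, kv_cache)
```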