text-generation-inference/server/text_generation_server/layers/compressed_tensors
Daniël de Kok · 571ac9b507 · Use kernels from the kernel hub (#2988)
* Use Hub kernels for the Marlin and CUTLASS quantization kernels

* Use Hub kernels for MoE and GPTQ-Marlin MoE

* Use attention kernels from the Hub

* Cache the kernels in the Docker image

* Update moe kernels

* Support loading local kernels for development

* Support latest moe kernels

* Update to moe 0.1.1

* CI: download locked kernels for server tests

* Fixup some imports

* CI: activate venv

* Fix unused imports

* Nix: add attention/moe/quantization kernels

* Update hf-kernels to 0.1.5

* Update kernels

* Update tgi-nix flake for hf-kernels

* Fix EOF

* Take `load_kernel` out of a frequently-called function

* Hoist another case of kernel loading out of a somewhat hot function (see the sketch below)

* marlin-kernels -> quantization

* attention -> paged-attention

* EOF fix

* Update hf-kernels, fixup Docker

* ipex fix

* Remove outdated TODO
2025-02-10 19:19:25 +01:00
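
The two `load_kernel` bullets above describe a small but real optimization: resolving a kernel from the Hub (or a local checkout) involves lookup and import machinery, so it belongs at module import time rather than inside the per-request forward path. A minimal sketch of the pattern follows; the import path, `load_kernel`'s keyword signature, the repo id, and the `gptq_marlin_gemm` entry point are illustrative assumptions — only the helper's name appears in the log above:

```python
# Sketch of hoisting kernel loading out of a hot function. The import
# path, load_kernel's keyword signature, the repo id, and the
# gptq_marlin_gemm entry point are illustrative assumptions.
from text_generation_server.utils.kernels import load_kernel

# Resolve the Hub kernel once, at module import time.
quantization = load_kernel(
    module="quantization",
    repo_id="kernels-community/quantization",
)


def forward(x, qweight, scales):
    # The hot path only dispatches to the already-loaded kernel module,
    # instead of calling load_kernel() on every request.
    return quantization.gptq_marlin_gemm(x, qweight, scales)
```

Loading at import time also fails fast: a missing or incompatible kernel surfaces at server startup rather than on the first request.
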
File             Last commit                                                             Date
__init__.py      Add initial support for compressed-tensors checkpoints (#2732)         2024-11-10 13:54:07 +01:00
loader.py        Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)   2024-11-20 18:25:23 +01:00
w8a8_int.py      Use kernels from the kernel hub (#2988)                                2025-02-10 19:19:25 +01:00
w8an_fp.py       Add fp8 support for MoE models (#2928)                                 2025-01-29 13:56:32 +01:00
wna16_int_24.py  Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)   2024-11-20 18:25:23 +01:00
wna16_int.py     Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)   2024-11-20 18:25:23 +01:00
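
For orientation, the per-scheme modules above roughly split as follows: w8a8_int.py covers int8 weights with int8 activations, w8an_fp.py covers fp8 weights (with fp8 or higher-precision activations), and wna16_int.py / wna16_int_24.py cover weight-only int4/int8 with 16-bit activations, dense and 2:4 structured-sparse respectively. Below is a hypothetical sketch of the kind of scheme-to-module dispatch loader.py performs; the function and its parameters are illustrative assumptions, not TGI's actual API:

```python
# Hypothetical dispatch from a compressed-tensors scheme to the modules
# above; the function name and parameters are illustrative assumptions.
from typing import Optional


def select_module(
    weight_type: str,                # "int" or "float"
    activation_bits: Optional[int],  # 8 for w8a8, None for 16-bit activations
    sparsity: Optional[str],         # "2:4" for structured sparsity, else None
) -> str:
    if weight_type == "float":
        return "w8an_fp"        # fp8 weights, fp8 or fp16 activations
    if activation_bits == 8:
        return "w8a8_int"       # int8 weights and int8 activations
    if sparsity == "2:4":
        return "wna16_int_24"   # int4/int8 weights, 2:4 sparse, fp16 acts
    return "wna16_int"          # dense int4/int8 weights, fp16 activations
```
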