text-generation-inference/server/marlin
Daniël de Kok 4700ea413f Add support for Marlin 2:4 sparsity (#2102)
This change adds support for 2:4 sparsity when using Marlin
quantization. The 2:4 kernel is used when:

* the quantizer is `marlin`;
* the checkpoint format is `marlin_24`.

Fixes #2098.
2024-09-24 03:55:04 +00:00
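As a rough illustration of the selection rule described in the commit above, the sketch below checks both conditions before choosing the 2:4 sparse Marlin path. This is a minimal, self-contained Python sketch, not the actual TGI server code; `QuantizationConfig` and `use_marlin_24` are hypothetical names introduced here.

```python
# Hypothetical sketch of the kernel-selection rule from the commit message;
# names below are illustrative, not the real text-generation-inference API.
from dataclasses import dataclass


@dataclass
class QuantizationConfig:
    quantize: str           # requested quantizer, e.g. "marlin"
    checkpoint_format: str  # format declared by the checkpoint, e.g. "marlin_24"


def use_marlin_24(cfg: QuantizationConfig) -> bool:
    """Return True when the 2:4 sparse Marlin kernel should be used:
    the quantizer is `marlin` and the checkpoint format is `marlin_24`."""
    return cfg.quantize == "marlin" and cfg.checkpoint_format == "marlin_24"


if __name__ == "__main__":
    # Dense Marlin checkpoint: falls back to the regular Marlin kernel.
    print(use_marlin_24(QuantizationConfig("marlin", "marlin")))     # False
    # 2:4 sparse Marlin checkpoint: selects the sparse kernel.
    print(use_marlin_24(QuantizationConfig("marlin", "marlin_24")))  # True
```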
marlin_kernels   Add support for Marlin 2:4 sparsity (#2102)   2024-09-24 03:55:04 +00:00
COPYRIGHT        Add support for GPTQ Marlin (#2052)           2024-09-24 03:43:30 +00:00
setup.py         Add support for Marlin 2:4 sparsity (#2102)   2024-09-24 03:55:04 +00:00