text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok 26460f053d Add support for repacking AWQ weights for GPTQ-Marlin (#2278)
* Add support for repacking AWQ weights for GPTQ-Marlin

So far we couldn't support AWQ because virtually all AWQ models use
asymmetric quantization, which GPTQ-Marlin did not support. GPTQ-Marlin
has recently added support for AWQ repacking and AWQ asymmetric
quantization (zero_point=True).

This change updates all GPTQ-Marlin kernels from upstream and wires up
AWQ support. For now enabling AWQ using Marlin requires running TGI with
`--quantize gptq`.

* Enable Marlin for supported AWQ configurations by default

This makes the AWQ -> GPTQ repack test redundant, since the repacking
path is now exercised by the regular AWQ test.
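A minimal sketch of how this looks from the user's side (the Docker image tag, model id, and port below are illustrative, not taken from this change):

```shell
# Illustrative only: serve an AWQ-quantized model with TGI.
# After this change, supported AWQ configurations pick the Marlin
# kernels by default; previously, using Marlin for AWQ weights
# required forcing the repack path with `--quantize gptq`.
docker run --gpus all -p 8080:80 \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id TheBloke/Llama-2-7B-AWQ \
    --quantize awq
```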
2024-09-25 05:31:31 +00:00
__init__.py Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-09-25 05:31:31 +00:00
custom_autotune.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
exllama.py Fix GPTQWeight import (#2020) 2024-09-24 03:34:15 +00:00
exllamav2.py Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) 2024-09-24 03:32:55 +00:00
quant_linear.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
quantize.py Use symmetric quantization in the quantize subcommand (#2120) 2024-09-25 05:27:40 +00:00