text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok ee56266044 Use symmetric quantization in the quantize subcommand (#2120)
Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So instead
we use symmetric quantization. To distinguish models with symmetric
and asymmetric quantization, a new config tensor `gptq_sym` is
added. If this tensor is not present, we assume `sym=False`.
2024-09-25 05:27:40 +00:00
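The description above implies a simple detection protocol: check the checkpoint for a `gptq_sym` tensor and fall back to asymmetric quantization when it is absent. A minimal sketch of such a check, assuming a safetensors checkpoint holding a single boolean-like value (the function name and path argument are illustrative, not the actual TGI loader):

```python
from safetensors import safe_open

def read_gptq_sym(checkpoint_path: str) -> bool:
    """Return the symmetric-quantization flag stored in a checkpoint (sketch)."""
    with safe_open(checkpoint_path, framework="pt", device="cpu") as f:
        if "gptq_sym" in f.keys():
            # Tensor present: it holds a single boolean-like value.
            return bool(f.get_tensor("gptq_sym").item())
    # Tensor absent: older asymmetric checkpoint, so assume sym=False.
    return False
```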
__init__.py Use symmetric quantization in the quantize subcommand (#2120) 2024-09-25 05:27:40 +00:00
custom_autotune.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
exllama.py Fix GPTQWeight import (#2020) 2024-09-24 03:34:15 +00:00
exllamav2.py Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) 2024-09-24 03:32:55 +00:00
quant_linear.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
quantize.py Use symmetric quantization in the quantize subcommand (#2120) 2024-09-25 05:27:40 +00:00