text-generation-inference/server/text_generation_server/layers/gptq
Latest commit: 247a29f77c by Daniël de Kok, "server quantize: store quantizer config in standard format" (#2299), 2024-09-25 05:50:17 +00:00
- Create `quantization_config` option in the model config.
- Don't store the quantizer config in tensors anymore.
__init__.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
custom_autotune.py Some small fixes for the Torch 2.4.0 update (#2304) 2024-09-25 05:40:25 +00:00
exllama.py Fix GPTQWeight import (#2020) 2024-09-24 03:34:15 +00:00
exllamav2.py Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) 2024-09-24 03:32:55 +00:00
quant_linear.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
quantize.py server quantize: store quantizer config in standard format (#2299) 2024-09-25 05:50:17 +00:00
utils.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
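The #2299 change above moves the quantizer settings out of the tensor files and into a `quantization_config` entry in the model config. A minimal sketch of that idea, assuming the HF-transformers-style key layout (`quant_method`, `bits`, `group_size`, `desc_act`, `sym`); the helper name and exact fields are illustrative, not the repo's actual API:

```python
import json

# Hypothetical helper: record GPTQ quantizer settings under a standard
# `quantization_config` key in the model config dict, instead of
# embedding them in the quantized tensors themselves.
def add_quantization_config(config: dict, bits: int, group_size: int,
                            desc_act: bool, sym: bool) -> dict:
    config = dict(config)  # copy so the caller's config is not mutated
    config["quantization_config"] = {
        "quant_method": "gptq",   # HF-style field names (assumption)
        "bits": bits,
        "group_size": group_size,
        "desc_act": desc_act,
        "sym": sym,
    }
    return config

model_config = {"model_type": "llama", "hidden_size": 4096}
updated = add_quantization_config(model_config, bits=4, group_size=128,
                                  desc_act=False, sym=True)
print(json.dumps(updated, indent=2))
```

Keeping the settings in the config (rather than in tensor metadata) lets loaders discover the quantization scheme before reading any weights.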