text-generation-inference/server/text_generation_server/layers/gptq
Latest commit: cae0cbe87d by jiqing-feng, 2025-03-10 15:03:51 +01:00
Add modules_to_not_convert in quantized model (#3053)

* fix modules_to_not_convert
* fix format
* fix tp quant skip
* revert unquantized changes
* use DefaultWeightsLoader in skip modules

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
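The commit above threads a `modules_to_not_convert` list through the GPTQ loading path so that listed modules are loaded unquantized rather than as packed GPTQ tensors. The sketch below is a minimal illustration of that idea under assumed signatures, not TGI's actual implementation: `DefaultWeightsLoader` is a real class in `text_generation_server.utils.weights`, but the simplified loaders, the `is_module_skipped` helper, and the dict-backed `DictWeights` store are all hypothetical stand-ins for this example.

```python
# Minimal sketch (assumed API): skip quantization for modules listed in
# `modules_to_not_convert` by falling back to a default, unquantized loader.
from typing import List, Optional


def is_module_skipped(prefix: str, modules_to_not_convert: Optional[List[str]]) -> bool:
    # Hypothetical helper: a module is skipped if any listed pattern
    # appears in its prefix (e.g. "lm_head" matches "lm_head").
    if not modules_to_not_convert:
        return False
    return any(pattern in prefix for pattern in modules_to_not_convert)


class DefaultWeightsLoader:
    # Stand-in for TGI's unquantized loader: reads a plain weight tensor.
    def get_weights(self, weights, prefix: str):
        return weights.get_tensor(f"{prefix}.weight")


class GPTQWeightsLoader:
    # Stand-in GPTQ loader that honors `modules_to_not_convert`.
    def __init__(self, bits: int, groupsize: int,
                 modules_to_not_convert: Optional[List[str]] = None):
        self.bits = bits
        self.groupsize = groupsize
        self.modules_to_not_convert = modules_to_not_convert or []

    def get_weights(self, weights, prefix: str):
        # Skipped modules take the unquantized path; everything else is
        # read as the packed GPTQ triple (qweight, qzeros, scales).
        if is_module_skipped(prefix, self.modules_to_not_convert):
            return DefaultWeightsLoader().get_weights(weights, prefix)
        return (
            weights.get_tensor(f"{prefix}.qweight"),
            weights.get_tensor(f"{prefix}.qzeros"),
            weights.get_tensor(f"{prefix}.scales"),
        )


if __name__ == "__main__":
    class DictWeights:
        # Toy weight store standing in for a safetensors-backed one.
        def __init__(self, tensors):
            self._tensors = tensors

        def get_tensor(self, name):
            return self._tensors[name]

    loader = GPTQWeightsLoader(
        bits=4, groupsize=128, modules_to_not_convert=["lm_head"]
    )
    store = DictWeights({"lm_head.weight": "unquantized fp16 tensor"})
    # "lm_head" is in the skip list, so the default loader is used.
    print(loader.get_weights(store, "lm_head"))
```

If this reading is right, the last bullet ("use DefaultWeightsLoader in skip modules") suggests the design choice: rather than teaching the quantized loader a special unquantized mode, skipped modules are delegated to the existing default loader, keeping the two code paths separate.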
File                Last commit                                                                          Date
__init__.py         Add modules_to_not_convert in quantized model (#3053)                               2025-03-10 15:03:51 +01:00
custom_autotune.py  Some small fixes for the Torch 2.4.0 update (#2304)                                 2024-07-25 13:34:44 +02:00
exllama.py          Fix GPTQWeight import (#2020)                                                       2024-06-05 14:49:15 +02:00
exllamav2.py        Upgrading exl2. (#2415)                                                             2024-08-14 11:58:08 +02:00
ipex.py             fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717)     2024-11-04 16:07:51 +01:00
quantize.py         Flash Transformers modeling backend support (#2913)                                 2025-01-21 10:01:51 +01:00
triton.py           Fixing rocm gptq by using triton code too (renamed cuda into triton). (#2691)      2024-10-25 09:17:57 +02:00
utils.py            feat: add ruff and resolve issue (#2262)                                            2024-07-26 10:29:09 -04:00