This change adds support for MoE models that use GPTQ quantization. Currently only models with the following properties are supported:

- No `desc_act` with tensor parallelism, unless `group_size=-1`.
- No asymmetric quantization.
- No AWQ.
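As a rough illustration of these constraints, the sketch below shows how such a support check might look. The `GPTQConfig` dataclass, the `check_moe_gptq_support` function, and the `tp_size` parameter are illustrative names invented for this example, not the actual implementation in the repository.

```python
from dataclasses import dataclass


@dataclass
class GPTQConfig:
    """Subset of a GPTQ quantization config relevant to MoE support (illustrative)."""
    bits: int
    group_size: int   # -1 means a single group spanning the full input dimension
    desc_act: bool    # activation-order ("act-order") quantization
    sym: bool         # symmetric quantization
    quant_method: str # e.g. "gptq" or "awq"


def check_moe_gptq_support(config: GPTQConfig, tp_size: int) -> None:
    """Raise if a GPTQ-quantized MoE model falls outside the supported subset.

    Mirrors the constraints stated in the change description:
    - `desc_act` is only allowed with tensor parallelism when `group_size == -1`,
    - asymmetric quantization is not supported,
    - AWQ checkpoints are not supported.
    """
    if config.quant_method == "awq":
        raise NotImplementedError("AWQ-quantized MoE models are not supported")
    if not config.sym:
        raise NotImplementedError("Asymmetric GPTQ quantization is not supported for MoE")
    if config.desc_act and tp_size > 1 and config.group_size != -1:
        raise NotImplementedError(
            "desc_act with tensor parallelism requires group_size == -1"
        )


if __name__ == "__main__":
    # Accepted: symmetric 4-bit GPTQ, desc_act with a single group, TP degree 2.
    check_moe_gptq_support(
        GPTQConfig(bits=4, group_size=-1, desc_act=True, sym=True, quant_method="gptq"),
        tp_size=2,
    )
```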
Files:

- `__init__.py`
- `fused_moe_rocm.py`
- `gptq_marlin.py`
- `unquantized.py`