text-generation-inference/server/text_generation_server/layers/moe
Daniël de Kok 288bcb0027 Add support for GPTQ-quantized MoE models using MoE Marlin (#2557)
This change adds support for MoE models that use GPTQ quantization.
Currently, only models with the following properties are supported:

- No `desc_act` with tensor parallelism, unless `group_size=-1`.
- No asymmetric quantization.
- No AWQ.
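The constraints above can be expressed as a pre-flight check. This is a hypothetical sketch, not the repository's actual validation code; the field names (`quant_method`, `sym`, `desc_act`, `group_size`) follow the Hugging Face `quantization_config` convention and are assumptions here:

```python
def check_moe_gptq_support(quant_config: dict, tp_size: int) -> None:
    """Hypothetical check: raise if a quantized MoE model falls outside
    the supported subset described in the commit message."""
    # AWQ-quantized MoE models are not supported.
    if quant_config.get("quant_method") == "awq":
        raise ValueError("AWQ quantization is not supported for MoE models")
    # Only symmetric quantization is supported.
    if not quant_config.get("sym", True):
        raise ValueError("Asymmetric quantization is not supported")
    # desc_act with tensor parallelism is only allowed when quantization
    # groups span whole rows (group_size == -1).
    if (
        quant_config.get("desc_act")
        and tp_size > 1
        and quant_config.get("group_size", -1) != -1
    ):
        raise ValueError(
            "desc_act with tensor parallelism requires group_size=-1"
        )
```

A supported configuration (symmetric GPTQ without `desc_act`) passes silently; an unsupported one raises before model loading begins.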
2024-10-25 09:07:52 +00:00
__init__.py Add support for GPTQ-quantized MoE models using MoE Marlin (#2557) 2024-10-25 09:07:52 +00:00
fused_moe_rocm.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00
gptq_marlin.py Add support for GPTQ-quantized MoE models using MoE Marlin (#2557) 2024-10-25 09:07:52 +00:00
unquantized.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00