text-generation-inference/server/text_generation_server/layers/moe
Daniël de Kok 90a1d04a2f
Add support for GPTQ-quantized MoE models using MoE Marlin (#2557)
This change adds support for MoE models that use GPTQ quantization.
Currently, only models with the following properties are supported:

- No `desc_act` with tensor parallelism, unless `group_size=-1`.
- No asymmetric quantization.
- No AWQ.
2024-09-30 11:14:32 +02:00
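The constraints above can be expressed as a simple predicate. The sketch below is purely illustrative (the function and its parameters are hypothetical, not part of text-generation-inference) and shows how a GPTQ quantization config could be checked against the listed restrictions:

```python
# Hypothetical helper, not part of text-generation-inference.
# Mirrors the constraints listed in the commit message above.
def gptq_moe_supported(desc_act: bool, group_size: int, sym: bool,
                       quant_method: str, tensor_parallel: bool) -> bool:
    if quant_method != "gptq":
        # AWQ (and other methods) are not supported.
        return False
    if not sym:
        # No asymmetric quantization.
        return False
    if desc_act and tensor_parallel and group_size != -1:
        # desc_act with tensor parallelism only when group_size=-1.
        return False
    return True
```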
__init__.py Add support for GPTQ-quantized MoE models using MoE Marlin (#2557) 2024-09-30 11:14:32 +02:00
fused_moe_rocm.py Update ROCM libs and improvements (#2579) 2024-09-30 10:54:32 +02:00
gptq_marlin.py Add support for GPTQ-quantized MoE models using MoE Marlin (#2557) 2024-09-30 11:14:32 +02:00
unquantized.py Update ROCM libs and improvements (#2579) 2024-09-30 10:54:32 +02:00