text-generation-inference/integration-tests/models/__snapshots__/test_flash_mixtral_gptq
Commit 90a1d04a2f by Daniël de Kok (2024-09-30 11:14:32 +02:00): Add support for GPTQ-quantized MoE models using MoE Marlin (#2557)
This change adds support for MoE models that use GPTQ quantization.
Currently, only models with the following properties are supported (see the sketch after this list):

- No `desc_act` with tensor parallelism, unless `group_size=-1`.
- No asymmetric quantization.
- No AWQ.
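The constraints above amount to a small predicate over the checkpoint's quantization config. The following is a minimal, hypothetical sketch, not TGI's actual code: `supports_moe_marlin` is an illustrative helper, and the field names `quant_method`, `sym`, `desc_act`, and `group_size` are assumed to follow the usual GPTQ `quantize_config.json` conventions.

```python
# Hypothetical sketch of the support matrix above; not TGI's actual code.

def supports_moe_marlin(quant_config: dict, tensor_parallel: bool) -> bool:
    """Check whether a GPTQ-quantized MoE checkpoint fits the constraints
    listed in the commit message. Field names follow the common
    quantize_config.json conventions (an assumption, not TGI's API).
    """
    # AWQ checkpoints are not supported.
    if quant_config.get("quant_method") == "awq":
        return False
    # Asymmetric quantization is not supported.
    if not quant_config.get("sym", True):
        return False
    # `desc_act` with tensor parallelism only works when group_size == -1.
    if (
        quant_config.get("desc_act", False)
        and tensor_parallel
        and quant_config.get("group_size", -1) != -1
    ):
        return False
    return True


# Example: a symmetric GPTQ config with act-order (desc_act) and 128-wide
# groups is rejected under tensor parallelism but accepted without it.
cfg = {"quant_method": "gptq", "sym": True, "desc_act": True, "group_size": 128}
assert not supports_moe_marlin(cfg, tensor_parallel=True)
assert supports_moe_marlin(cfg, tensor_parallel=False)
```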
Snapshot files, all added in commit 90a1d04a2f (#2557, 2024-09-30):

- test_flash_mixtral_gptq_all_params.json
- test_flash_mixtral_gptq_load.json
- test_flash_mixtral_gptq.json