Directory: text-generation-inference/server/text_generation_server/layers/moe

Latest commit: 1d3c9beba8 by Wang, Yi — fix moe in quantization path (#2935)
    update ipex xpu to support moe for mixtral
    Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
    2025-01-22 14:36:15 +01:00
File               Last commit message                              Date
__init__.py        fix moe in quantization path (#2935)             2025-01-22 14:36:15 +01:00
fused_moe_ipex.py  fix moe in quantization path (#2935)             2025-01-22 14:36:15 +01:00
gptq_marlin.py     Add support for fused MoE Marlin for AWQ (#2616) 2024-10-08 11:56:41 +02:00
unquantized.py     Update vllm kernels for ROCM (#2826)             2024-12-18 12:44:42 +01:00