text-generation-inference/server/text_generation_server/layers/moe
Latest commit a5ecd6e586 by Wang, Yi:
add ipex moe implementation to support Mixtral and PhiMoe (#2707)
* add ipex moe implementation to support Mixtral and PhiMoe

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* update to ipex xpu 2.5

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* torch has xpu support in 2.5

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix oneapi basekit version

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* Apply suggestions from code review

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Committed: 2024-11-18 17:16:55 +01:00
__init__.py add ipex moe implementation to support Mixtral and PhiMoe (#2707) 2024-11-18 17:16:55 +01:00
fused_moe_rocm.py Update ROCM libs and improvements (#2579) 2024-09-30 10:54:32 +02:00
gptq_marlin.py Add support for fused MoE Marlin for AWQ (#2616) 2024-10-08 11:56:41 +02:00
unquantized.py add ipex moe implementation to support Mixtral and PhiMoe (#2707) 2024-11-18 17:16:55 +01:00
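Based on the commit messages above, the files in this directory provide mixture-of-experts (MoE) layer implementations for different backends (IPEX/XPU, ROCm, Marlin for AWQ/GPTQ, and unquantized). For orientation only, below is a minimal, hypothetical sketch in plain PyTorch of what a top-k MoE forward pass computes. The function name, tensor shapes, and the simplified per-expert MLP are assumptions for illustration; this is not the actual TGI or IPEX kernel, which fuses routing and expert computation for performance.

```python
# Hypothetical sketch of a top-k MoE forward pass in plain PyTorch.
# Not the TGI/IPEX implementation; names, shapes, and the per-expert MLP are illustrative.
import torch
import torch.nn.functional as F


def moe_forward(hidden, gate_weight, w1, w2, top_k=2):
    """Route each token to its top_k experts and mix their outputs.

    hidden:      [num_tokens, hidden_dim]
    gate_weight: [num_experts, hidden_dim]             router projection
    w1:          [num_experts, inter_dim, hidden_dim]  per-expert up projection
    w2:          [num_experts, hidden_dim, inter_dim]  per-expert down projection
    """
    # Router: softmax over expert logits, keep the top_k experts per token.
    logits = hidden @ gate_weight.t()                       # [tokens, experts]
    weights, experts = torch.topk(F.softmax(logits, dim=-1), top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize kept weights

    out = torch.zeros_like(hidden)
    for k in range(top_k):
        for e in range(gate_weight.shape[0]):
            # Tokens whose k-th routed expert is expert e.
            mask = experts[:, k] == e
            if mask.any():
                x = hidden[mask]
                # Simplified per-expert MLP; real Mixtral/PhiMoE experts use a gated (SwiGLU) MLP.
                y = F.silu(x @ w1[e].t()) @ w2[e].t()
                out[mask] += weights[mask, k].unsqueeze(-1) * y
    return out


# Tiny usage example with random weights: 8 experts, top-2 routing, hidden_dim=16, inter_dim=32.
h = torch.randn(4, 16)
out = moe_forward(h, torch.randn(8, 16), torch.randn(8, 32, 16), torch.randn(8, 16, 32))
```

The fused kernels referenced by these files avoid the explicit per-expert Python loop above by grouping tokens per expert and dispatching a single batched kernel per backend.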