text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling

Latest commit: ded4cb52ac "[Gaudi] Enable Qwen3_moe model (#3244)" by Yuan Wu (Signed-off-by: yuanwu <yuan.wu@intel.com>), 2025-06-13 12:03:24 +02:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| bloom_modeling.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| clip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| flash_cohere_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_dbrx_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_deepseek_v2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_deepseek_v3_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_gemma2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_gemma_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_gpt2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_gptj_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_llama4_modeling.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_llama_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_llava_next.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_mistral_modeling.py | [gaudi] HuggingFaceM4/idefics2-8b issue fix (#3264) | 2025-06-13 12:00:08 +02:00 |
| flash_mixtral_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_mllama.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_neox_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_pali_gemma_modeling.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_phi_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_phi_moe_modeling.py | Deepseek R1 for Gaudi backend (#3211) | 2025-05-19 16:36:39 +02:00 |
| flash_qwen2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_qwen3_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_qwen3_moe_modeling.py | [Gaudi] Enable Qwen3_moe model (#3244) | 2025-06-13 12:03:24 +02:00 |
| flash_rw_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_santacoder_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_starcoder2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| idefics2.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| idefics3.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| idefics_config.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_image_processing.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_modeling.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| idefics_perceiver.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_processing.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_vision.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| mamba_modeling.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| qwen2_5_vl.py | [Gaudi] Remove optimum-habana (#3261) | 2025-06-12 22:35:36 +02:00 |
| qwen2_vl.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| siglip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| vlm.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |