text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| bloom_modeling.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| clip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| flash_cohere_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_dbrx_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_deepseek_v2_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_deepseek_v3_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_gemma2_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_gemma3_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_gemma_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_gpt2_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_gptj_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_llama4_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_llama_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_llava_next.py | [gaudi] Refine logging for Gaudi warmup (#3222) | 2025-06-18 12:34:00 +02:00 |
| flash_mistral_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_mixtral_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_mllama.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_neox_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_pali_gemma_modeling.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| flash_phi_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_phi_moe_modeling.py | Deepseek R1 for Gaudi backend (#3211) | 2025-05-19 16:36:39 +02:00 |
| flash_qwen2_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_qwen3_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_qwen3_moe_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_rw_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| flash_santacoder_modeling.py | [gaudi] Perf optimization (#3256) | 2025-06-11 15:00:21 +02:00 |
| flash_starcoder2_modeling.py | [gaudi] Refine rope memory, do not need to keep sin/cos cache per layer (#3274) | 2025-06-23 11:15:39 +02:00 |
| idefics2.py | [gaudi] Refine logging for Gaudi warmup (#3222) | 2025-06-18 12:34:00 +02:00 |
| idefics3.py | [gaudi] Refine logging for Gaudi warmup (#3222) | 2025-06-18 12:34:00 +02:00 |
| mamba_modeling.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| qwen2_5_vl.py | [Gaudi] Remove optimum-habana (#3261) | 2025-06-12 22:35:36 +02:00 |
| qwen2_vl.py | [gaudi] Vlm rebase and issue fix in benchmark test (#3263) | 2025-06-12 22:26:37 +02:00 |
| siglip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| vlm.py | [gaudi] gemma3 text and vlm model initial support. need to add sliding window support later (#3270) | 2025-06-19 09:32:34 +02:00 |