Directory: text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling
Latest commit 2074d0516b: enable dbrx remove some unused code (2025-03-19 03:16:41 -07:00)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
__init__.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
bloom_modeling.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
clip.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
flash_cohere_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_dbrx_modeling.py enable dbrx remove some unused code 2025-03-19 03:16:41 -07:00
flash_deepseek_v2_modeling.py enable all the model. not testet yet 2025-03-17 01:26:32 -07:00
flash_deepseek_v3_modeling.py enable all the model. not testet yet 2025-03-17 01:26:32 -07:00
flash_gemma2_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_gemma_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_gpt2_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_gptj_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_llama_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_mistral_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_mixtral_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_neox_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_pali_gemma_modeling.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
flash_phi_modeling.py enable all the model. not testet yet 2025-03-17 01:26:32 -07:00
flash_phi_moe_modeling.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
flash_qwen2_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_rw_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_santacoder_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
flash_starcoder2_modeling.py add moe support, fix qwen/mistral/mixtral crash 2025-03-18 00:45:15 -07:00
idefics2.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
idefics3.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
idefics_config.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
idefics_image_processing.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
idefics_modeling.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
idefics_perceiver.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
idefics_processing.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
idefics_vision.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
llava_next.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
mamba_modeling.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
mllama.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
qwen2_5_vl.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
qwen2_vl.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
siglip.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
vlm.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
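Rows in a listing like the one above follow a fixed shape: a filename, a free-form commit subject, and a trailing timestamp. When such a listing needs to be processed (e.g. to find stale files), it can be split into structured records. A minimal sketch, assuming the exact `<name> <message> <YYYY-MM-DD HH:MM:SS ±HH:MM>` layout shown here; the regex and field names are illustrative, not part of the repository:

```python
import re

# One listing row: "<filename> <commit subject> <date with UTC offset>".
# The message group is non-greedy so the anchored date pattern claims the tail.
ROW = re.compile(
    r"^(?P<name>\S+)\s+(?P<message>.+?)\s+"
    r"(?P<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [+-]\d{2}:\d{2})$"
)

def parse_listing(text):
    """Parse listing rows into (filename, commit subject, date) tuples."""
    rows = []
    for line in text.strip().splitlines():
        m = ROW.match(line.strip())
        if m:  # skip header or residue lines that do not match the row shape
            rows.append((m["name"], m["message"], m["date"]))
    return rows

sample = """\
clip.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
vlm.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
"""
for name, message, date in parse_listing(sample):
    print(name, "|", message, "|", date)
```

The same per-file last-commit view can be regenerated from a checkout of the repository with `git log -1 --format='%h %s %ad' -- <file>` for each file in the directory.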