| File | Last commit message | Last commit date |
| --- | --- | --- |
| __init__.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| bloom_modeling.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| clip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| flash_cohere_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_dbrx_modeling.py | enable dbrx remove some unused code | 2025-03-19 03:16:41 -07:00 |
| flash_deepseek_v2_modeling.py | enable all the model. not testet yet | 2025-03-17 01:26:32 -07:00 |
| flash_deepseek_v3_modeling.py | enable all the model. not testet yet | 2025-03-17 01:26:32 -07:00 |
| flash_gemma2_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_gemma_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_gpt2_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_gptj_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_llama_modeling.py | multi-modality initial PR | 2025-03-19 23:30:12 -07:00 |
| flash_llava_next.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| flash_mistral_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_mixtral_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_mllama.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| flash_neox_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_pali_gemma_modeling.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| flash_phi_modeling.py | enable all the model. not testet yet | 2025-03-17 01:26:32 -07:00 |
| flash_phi_moe_modeling.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| flash_qwen2_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_rw_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_santacoder_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| flash_starcoder2_modeling.py | add moe support, fix qwen/mistral/mixtral crash | 2025-03-18 00:45:15 -07:00 |
| idefics2.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| idefics3.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| idefics_config.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_image_processing.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_modeling.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| idefics_perceiver.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_processing.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| idefics_vision.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| llava_next.py | Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117) | 2025-03-18 09:45:52 +01:00 |
| mamba_modeling.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| mllama.py | Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117) | 2025-03-18 09:45:52 +01:00 |
| qwen2_5_vl.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| qwen2_vl.py | adjust warmup and enable vlm | 2025-03-20 23:12:52 -07:00 |
| siglip.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| vlm.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |