| Name | Last commit message | Commit date |
| --- | --- | --- |
| custom_modeling | Merge branch 'main' into gaudi_backend_pa | 2025-03-19 18:15:08 -07:00 |
| __init__.py | Merge branch 'main' into gaudi_backend_pa | 2025-03-19 18:15:08 -07:00 |
| bloom.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| causal_lm.py | Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117) | 2025-03-18 09:45:52 +01:00 |
| flash_causal_lm.py | use tensor cache in hpu graph to avoid replay issue | 2025-03-17 01:36:49 -07:00 |
| galactica.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| globals.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| idefics_causal_lm.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| mamba.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| mllama_causal_lm.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| model.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| pali_gemma.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| seq2seq_lm.py | clean cuda/rocm code in hpu backend, enable flat_hpu | 2025-03-14 01:25:31 -07:00 |
| starcoder.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| types.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| vlm_causal_lm.py | Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117) | 2025-03-18 09:45:52 +01:00 |