| Name | Last commit message | Last commit date |
| --- | --- | --- |
| custom_modeling | Warmup gaudi backend (#3172) | 2025-04-24 09:57:08 +02:00 |
| __init__.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| bloom.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| causal_lm.py | Change HPU warmup logic: seq length should be with exponential growth (#3217) | 2025-05-10 15:41:18 +02:00 |
| flash_causal_lm.py | forward and tokenize chooser use the same shape (#3196) | 2025-05-06 10:49:32 +02:00 |
| flash_vlm_causal_lm.py | forward and tokenize chooser use the same shape (#3196) | 2025-05-06 10:49:32 +02:00 |
| galactica.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| globals.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| idefics_causal_lm.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| mamba.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| mllama_causal_lm.py | forward and tokenize chooser use the same shape (#3196) | 2025-05-06 10:49:32 +02:00 |
| model.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| pali_gemma.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| seq2seq_lm.py | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| starcoder.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| types.py | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| vlm_causal_lm.py | Fix HF_HUB_OFFLINE=1 for Gaudi backend (#3193) | 2025-05-06 10:47:53 +02:00 |
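The causal_lm.py entry references #3217, which switched HPU warmup to sequence lengths with exponential rather than linear growth. As a minimal sketch of that idea only (not the file's actual implementation; `warmup_seq_lengths` and its parameters are hypothetical names), warmup buckets can be generated by repeatedly doubling up to the configured maximum:

```python
# Hypothetical sketch of exponential-growth warmup buckets (the idea named
# in #3217); this is NOT the actual text-generation-inference code.
def warmup_seq_lengths(min_len: int, max_len: int, growth: float = 2.0) -> list[int]:
    """Return warmup sequence lengths growing exponentially from min_len
    to max_len, so warmup needs O(log n) shape compilations instead of
    one compilation per possible length."""
    lengths = []
    length = min_len
    while length < max_len:
        lengths.append(length)
        length = int(length * growth)
    lengths.append(max_len)  # always include the maximum shape
    return lengths


if __name__ == "__main__":
    # prints [16, 32, 64, 128, 256, 512, 1024, 2048]
    print(warmup_seq_lengths(16, 2048))
```

Exponential buckets keep the number of pre-compiled shapes logarithmic in the maximum sequence length, at the cost of some padding waste for requests that fall between bucket boundaries.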