text-generation-inference/backends/gaudi/server/text_generation_server/models

Latest commit: f208ba6afc by regisss, 2025-05-06 10:47:53 +02:00
Fix HF_HUB_OFFLINE=1 for Gaudi backend (#3193)
* Fix `HF_HUB_OFFLINE=1` for Gaudi backend
* Fix HF cache default value in server.rs
* Format
Name                    Last commit message                                                   Date
custom_modeling/        Warmup gaudi backend (#3172)                                          2025-04-24 09:57:08 +02:00
__init__.py             Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
bloom.py                Add Gaudi Backend (#3055)                                             2025-02-28 12:14:58 +01:00
causal_lm.py            Fix HF_HUB_OFFLINE=1 for Gaudi backend (#3193)                        2025-05-06 10:47:53 +02:00
flash_causal_lm.py      Warmup gaudi backend (#3172)                                          2025-04-24 09:57:08 +02:00
flash_vlm_causal_lm.py  Warmup gaudi backend (#3172)                                          2025-04-24 09:57:08 +02:00
galactica.py            Add Gaudi Backend (#3055)                                             2025-02-28 12:14:58 +01:00
globals.py              Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
idefics_causal_lm.py    Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
mamba.py                Add Gaudi Backend (#3055)                                             2025-02-28 12:14:58 +01:00
mllama_causal_lm.py     Warmup gaudi backend (#3172)                                          2025-04-24 09:57:08 +02:00
model.py                Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
pali_gemma.py           Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
seq2seq_lm.py           Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)   2025-04-14 15:58:13 +02:00
starcoder.py            Add Gaudi Backend (#3055)                                             2025-02-28 12:14:58 +01:00
types.py                Add Gaudi Backend (#3055)                                             2025-02-28 12:14:58 +01:00
vlm_causal_lm.py        Fix HF_HUB_OFFLINE=1 for Gaudi backend (#3193)                        2025-05-06 10:47:53 +02:00
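The latest commit above concerns offline operation: with `HF_HUB_OFFLINE=1` set, model weights must be resolved from the local Hugging Face cache without contacting the Hub. As a rough illustration of that behavior, here is a minimal sketch using `huggingface_hub` directly; this is not the Gaudi backend's actual loading code, and `resolve_model_path` is a hypothetical helper name.

```python
import os

from huggingface_hub import snapshot_download
from huggingface_hub.utils import LocalEntryNotFoundError


def resolve_model_path(model_id: str, revision: str = "main") -> str:
    """Return a local snapshot directory for model_id, honoring HF_HUB_OFFLINE.

    Illustrative sketch only; the real TGI Gaudi loader differs.
    """
    # huggingface_hub already refuses network calls when HF_HUB_OFFLINE=1,
    # but passing local_files_only makes the offline code path explicit.
    offline = os.environ.get("HF_HUB_OFFLINE", "0") == "1"
    try:
        return snapshot_download(
            model_id,
            revision=revision,
            local_files_only=offline,  # only read the local cache when offline
        )
    except LocalEntryNotFoundError:
        raise RuntimeError(
            f"{model_id} is not in the local cache and HF_HUB_OFFLINE=1 "
            "prevents downloading it; fetch the weights before going offline."
        )
```

The commit's second bullet ("Fix HF cache default value in server.rs") touches the Rust router rather than these Python files, so it is not reflected in the sketch.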