text-generation-inference/backends/gaudi/server/text_generation_server/models
| Name | Last commit message | Commit date |
|---|---|---|
| `custom_modeling/` | port https://github.com/huggingface/text-generation-inference/pull/3188 to gaudi backend | 2025-06-11 23:47:10 -07:00 |
| `__init__.py` | port https://github.com/huggingface/text-generation-inference/pull/3188 to gaudi backend | 2025-06-11 23:47:10 -07:00 |
| `bloom.py` | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| `causal_lm.py` | Adjust the round_up_seq logic in Gaudi backend (#3224) | 2025-05-12 09:58:43 +02:00 |
| `flash_causal_lm.py` | port https://github.com/huggingface/text-generation-inference/pull/3188 to gaudi backend | 2025-06-11 23:47:10 -07:00 |
| `flash_vlm_causal_lm.py` | port https://github.com/huggingface/text-generation-inference/pull/3188 to gaudi backend | 2025-06-11 23:47:10 -07:00 |
| `galactica.py` | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| `globals.py` | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| `idefics_causal_lm.py` | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| `mamba.py` | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| `mllama_causal_lm.py` | port https://github.com/huggingface/text-generation-inference/pull/3188 to gaudi backend | 2025-06-11 23:47:10 -07:00 |
| `model.py` | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| `seq2seq_lm.py` | Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113) | 2025-04-14 15:58:13 +02:00 |
| `starcoder.py` | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| `types.py` | Add Gaudi Backend (#3055) | 2025-02-28 12:14:58 +01:00 |
| `vlm_causal_lm.py` | Fix HF_HUB_OFFLINE=1 for Gaudi backend (#3193) | 2025-05-06 10:47:53 +02:00 |