text-generation-inference/backends/gaudi/server/text_generation_server/models
Wang, Yi A 6bbe24d974 use tensor cache in hpu graph to avoid replay issue
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-03-17 01:36:49 -07:00
custom_modeling enable all the models, not tested yet 2025-03-17 01:26:32 -07:00
__init__.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
bloom.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
causal_lm.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
flash_causal_lm.py use tensor cache in hpu graph to avoid replay issue 2025-03-17 01:36:49 -07:00
galactica.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
globals.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
idefics_causal_lm.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
mamba.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
mllama_causal_lm.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
model.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
pali_gemma.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
seq2seq_lm.py clean cuda/rocm code in hpu backend, enable flat_hpu 2025-03-14 01:25:31 -07:00
starcoder.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
types.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
vlm_causal_lm.py Add Gaudi Backend (#3055) 2025-02-28 12:14:58 +01:00
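
The head commit above, "use tensor cache in hpu graph to avoid replay issue", names the usual static-tensor pattern for HPU graphs: the captured graph is bound to a fixed set of pre-allocated tensors, and fresh data is copied into those cached buffers before every replay. The sketch below is a minimal illustration of that pattern, assuming a Gaudi device with `habana_frameworks.torch` and its `HPUGraph`/`Stream` capture-and-replay API; the names `static_input`, `static_output`, and `run` are hypothetical, and this is not the actual `flash_causal_lm.py` implementation.

```python
# Minimal sketch of the HPU-graph "tensor cache" pattern (assumes a Gaudi
# device with habana_frameworks.torch installed; API mirrors CUDA graphs).
import torch
import habana_frameworks.torch as htorch

device = torch.device("hpu")
model = torch.nn.Linear(1024, 1024).to(device).eval()

# Cached (static) tensors: the captured graph is bound to these buffers,
# so they must stay alive and be reused on every replay.
static_input = torch.zeros(8, 1024, device=device)

graph = htorch.hpu.HPUGraph()
stream = htorch.hpu.Stream()
with torch.no_grad(), htorch.hpu.stream(stream):
    graph.capture_begin()
    static_output = model(static_input)
    graph.capture_end()


def run(batch: torch.Tensor) -> torch.Tensor:
    # Copy new data into the cached input instead of passing a new tensor;
    # replaying a graph against different tensor addresses is the kind of
    # replay issue the commit message refers to.
    static_input.copy_(batch)
    graph.replay()
    # Clone so a later replay does not overwrite results the caller holds.
    return static_output.clone()
```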