text-generation-inference/backends/gaudi/server/text_generation_server/models

Latest commit: 2074d0516b enable dbrx remove some unused code
Author: Wang, Yi A
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Date: 2025-03-19 03:16:41 -07:00
Name                    Last commit message                                      Date
custom_modeling/        enable dbrx remove some unused code                      2025-03-19 03:16:41 -07:00
__init__.py             enable dbrx remove some unused code                      2025-03-19 03:16:41 -07:00
bloom.py                Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
causal_lm.py            Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
flash_causal_lm.py      use tensor cache in hpu graph to avoid replay issue      2025-03-17 01:36:49 -07:00
galactica.py            Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
globals.py              clean cuda/rocm code in hpu backend, enable flat_hpu     2025-03-14 01:25:31 -07:00
idefics_causal_lm.py    clean cuda/rocm code in hpu backend, enable flat_hpu     2025-03-14 01:25:31 -07:00
mamba.py                Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
mllama_causal_lm.py     clean cuda/rocm code in hpu backend, enable flat_hpu     2025-03-14 01:25:31 -07:00
model.py                clean cuda/rocm code in hpu backend, enable flat_hpu     2025-03-14 01:25:31 -07:00
pali_gemma.py           Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
seq2seq_lm.py           clean cuda/rocm code in hpu backend, enable flat_hpu     2025-03-14 01:25:31 -07:00
starcoder.py            Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
types.py                Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00
vlm_causal_lm.py        Add Gaudi Backend (#3055)                                2025-02-28 12:14:58 +01:00