text-generation-inference/backends/gaudi/server/text_generation_server/models
Latest commit: f0e5faec1a "fix some issue" — Wang, Yi A <yi.a.wang@intel.com>, 2025-03-28 07:01:06 -07:00
Name                     Last commit message                                                       Last commit date
custom_modeling          fix some issue                                                            2025-03-28 07:01:06 -07:00
__init__.py              warmup prefill                                                            2025-03-26 03:10:58 -07:00
bloom.py                 Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
causal_lm.py             Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117)   2025-03-18 09:45:52 +01:00
flash_causal_lm.py       fix some issue                                                            2025-03-28 07:01:06 -07:00
flash_vlm_causal_lm.py   fix some issue                                                            2025-03-28 07:01:06 -07:00
galactica.py             Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
globals.py               clean cuda/rocm code in hpu backend, enable flat_hpu                      2025-03-14 01:25:31 -07:00
idefics_causal_lm.py     clean cuda/rocm code in hpu backend, enable flat_hpu                      2025-03-14 01:25:31 -07:00
mamba.py                 Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
mllama_causal_lm.py      fix some issue                                                            2025-03-28 07:01:06 -07:00
model.py                 clean cuda/rocm code in hpu backend, enable flat_hpu                      2025-03-14 01:25:31 -07:00
pali_gemma.py            multi-modality initial PR                                                 2025-03-19 23:30:12 -07:00
seq2seq_lm.py            remove unused quantization code and enable awq/gptq int4                  2025-03-22 19:37:20 -07:00
starcoder.py             Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
types.py                 Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
vlm_causal_lm.py         Merge branch 'main' into gaudi_backend_pa                                 2025-03-28 00:03:49 -07:00