text-generation-inference/backends/gaudi/server/text_generation_server/models
Latest commit: 9914ffe1f1 "remove unused quantization code and enable awq/gptq int4"
Author: Wang, Yi A <yi.a.wang@intel.com>
Date:   2025-03-22 19:37:20 -07:00
Name                      Last commit                                                               Date
custom_modeling/          remove unused quantization code and enable awq/gptq int4                 2025-03-22 19:37:20 -07:00
__init__.py               multi-modality initial PR                                                 2025-03-19 23:30:12 -07:00
bloom.py                  Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
causal_lm.py              Gaudi: Sync TGI with the latest changes from the TGI-Gaudi fork (#3117)  2025-03-18 09:45:52 +01:00
flash_causal_lm.py        adjust warmup and enable vlm                                              2025-03-20 23:12:52 -07:00
flash_vlm_causal_lm.py    adjust warmup and enable vlm                                              2025-03-20 23:12:52 -07:00
galactica.py              Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
globals.py                clean cuda/rocm code in hpu backend, enable flat_hpu                     2025-03-14 01:25:31 -07:00
idefics_causal_lm.py      clean cuda/rocm code in hpu backend, enable flat_hpu                     2025-03-14 01:25:31 -07:00
mamba.py                  Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
mllama_causal_lm.py       adjust warmup and enable vlm                                              2025-03-20 23:12:52 -07:00
model.py                  clean cuda/rocm code in hpu backend, enable flat_hpu                     2025-03-14 01:25:31 -07:00
pali_gemma.py             multi-modality initial PR                                                 2025-03-19 23:30:12 -07:00
seq2seq_lm.py             remove unused quantization code and enable awq/gptq int4                 2025-03-22 19:37:20 -07:00
starcoder.py              Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
types.py                  Add Gaudi Backend (#3055)                                                 2025-02-28 12:14:58 +01:00
vlm_causal_lm.py          adjust warmup and enable vlm                                              2025-03-20 23:12:52 -07:00
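
For orientation, `__init__.py` acts as the package entry point that picks one of these implementations when a model is loaded, and `model.py` defines the base class the others subclass. The sketch below illustrates how that entry point is typically used; it assumes the Gaudi backend keeps the `get_model` factory convention of the main TGI server, so the function name and the parameters shown are assumptions and may differ in this fork.

```python
# Hypothetical usage sketch: assumes models/__init__.py exposes a
# `get_model` factory as in the main TGI server; the exact signature
# may differ in the Gaudi backend.
from text_generation_server.models import get_model

# The factory inspects the model config and returns the matching
# implementation from this directory (e.g. CausalLM from causal_lm.py,
# or a flash-attention variant from flash_causal_lm.py).
model = get_model(
    model_id="bigscience/bloom-560m",
    revision=None,
    sharded=False,
    quantize="gptq",  # awq/gptq int4 enabled by the latest commit (assumed flag value)
    dtype=None,
    trust_remote_code=False,
)
batch_cls = model.batch_type  # each Model subclass declares its batch type
```

Which file handles a given model is decided inside the factory, so callers only depend on the `Model` interface from `model.py` rather than on the individual modules listed above.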