text-generation-inference/backends/gaudi/server/text_generation_server/models
Latest commit: bf3987e25e "pingpong optimization issue fix" by Wang, Yi A <yi.a.wang@intel.com>, 2025-04-15 21:56:51 -07:00
Name                      Last commit message                                                  Date
custom_modeling           fix warmup issue for mllama                                          2025-04-04 20:25:01 -07:00
__init__.py               warmup prefill                                                       2025-03-26 03:10:58 -07:00
bloom.py                  Add Gaudi Backend (#3055)                                            2025-02-28 12:14:58 +01:00
causal_lm.py              Gaudi: Use exponential growth to replace BATCH_BUCKET_SIZE (#3131)   2025-04-03 10:34:53 +02:00
flash_causal_lm.py        pingpong optimization issue fix                                      2025-04-15 21:56:51 -07:00
flash_vlm_causal_lm.py    improve performance                                                  2025-04-13 20:00:27 -07:00
galactica.py              Add Gaudi Backend (#3055)                                            2025-02-28 12:14:58 +01:00
globals.py                clean cuda/rocm code in hpu backend, enable flat_hpu                 2025-03-14 01:25:31 -07:00
idefics_causal_lm.py      clean cuda/rocm code in hpu backend, enable flat_hpu                 2025-03-14 01:25:31 -07:00
mamba.py                  Add Gaudi Backend (#3055)                                            2025-02-28 12:14:58 +01:00
mllama_causal_lm.py       prefill bypass graph                                                 2025-04-15 00:27:07 -07:00
model.py                  clean cuda/rocm code in hpu backend, enable flat_hpu                 2025-03-14 01:25:31 -07:00
pali_gemma.py             multi-modality initial PR                                            2025-03-19 23:30:12 -07:00
seq2seq_lm.py             remove unused quantization code and enable awq/gptq int4             2025-03-22 19:37:20 -07:00
starcoder.py              Add Gaudi Backend (#3055)                                            2025-02-28 12:14:58 +01:00
types.py                  Add Gaudi Backend (#3055)                                            2025-02-28 12:14:58 +01:00
vlm_causal_lm.py          Merge branch 'main' into gaudi_backend_pa                            2025-03-28 00:03:49 -07:00