text-generation-inference/backends/gaudi/server/text_generation_server
Latest commit: c94f415af4 by kaixuanliu, 2025-05-10 15:41:18 +02:00
Change HPU warmup logic: seq length should be with exponential growth (#3217)
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
Name                        Last commit message                                                            Last commit date
adapters/                   Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
layers/                     forward and tokenize chooser use the same shape (#3196)                       2025-05-06 10:49:32 +02:00
models/                     Change HPU warmup logic: seq length should be with exponential growth (#3217)  2025-05-10 15:41:18 +02:00
pb/                         Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
utils/                      forward and tokenize chooser use the same shape (#3196)                       2025-05-06 10:49:32 +02:00
__init__.py                 Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
cache.py                    Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
cli.py                      Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)            2025-04-14 15:58:13 +02:00
habana_quantization_env.py  Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
interceptor.py              Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
server.py                   Gaudi: clean cuda/rocm code in hpu backend, enable flat_hpu (#3113)            2025-04-14 15:58:13 +02:00
tgi_service.py              Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
tracing.py                  Add Gaudi Backend (#3055)                                                      2025-02-28 12:14:58 +01:00
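
The most recent change in this tree, "Change HPU warmup logic: seq length should be with exponential growth (#3217)", concerns how warmup sequence lengths are chosen. The following is a minimal sketch of exponentially spaced warmup buckets, for illustration only; the function name exponential_seq_buckets and its parameters are hypothetical and do not mirror the actual TGI Gaudi code:

    # Illustrative sketch only: warmup sequence-length buckets that grow
    # exponentially instead of in fixed linear steps. Names and defaults
    # here are assumptions, not taken from the repository.
    def exponential_seq_buckets(min_len: int, max_len: int, growth: float = 2.0) -> list[int]:
        """Return sequence lengths from min_len up to max_len with exponential spacing."""
        buckets = []
        length = min_len
        while length < max_len:
            buckets.append(length)
            length = int(length * growth)
        buckets.append(max_len)  # always warm up the maximum supported length
        return buckets

    if __name__ == "__main__":
        # e.g. [128, 256, 512, 1024, 2048, 4096] rather than a long linear sweep
        print(exponential_seq_buckets(128, 4096))

Exponential spacing keeps the number of warmup shapes, and therefore the number of HPU graph compilations, logarithmic in the maximum sequence length rather than linear.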