text-generation-inference/backends/gaudi/server/text_generation_server

Latest commit: 9d85ac9485 "LLM warmup logic"
Author: Wang, Yi A <yi.a.wang@intel.com>
Date: 2025-03-31 23:07:14 -07:00
Name                        | Last commit message                                            | Last commit date
adapters                    | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
layers                      | remove torch.where to fix incorrect output in hpu graph model  | 2025-03-31 22:51:54 -07:00
models                      | LLM warmup logic                                               | 2025-03-31 23:07:14 -07:00
pb                          | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
utils                       | remove unused quantization code and enable awq/gptq int4      | 2025-03-22 19:37:20 -07:00
__init__.py                 | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
cache.py                    | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
cli.py                      | enable fp8                                                     | 2025-03-25 05:06:55 -07:00
habana_quantization_env.py  | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
interceptor.py              | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
server.py                   | warmup prefill                                                 | 2025-03-26 03:10:58 -07:00
tgi_service.py              | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00
tracing.py                  | Add Gaudi Backend (#3055)                                      | 2025-02-28 12:14:58 +01:00