text-generation-inference/backends/gaudi/server/text_generation_server
Latest commit: 6b6e30a6f6 by Yuan Wu, 2025-05-29 11:38:44 +02:00
[gaudi] Fix the Llama-4-Maverick-17B-128E crash issue (#3246)
Signed-off-by: yuanwu <yuan.wu@intel.com>
Name                          Last modified                Last commit
adapters                      2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
layers                        2025-05-28 14:54:20 +02:00   fp8 compressed tensors w8a8 support for Gaudi backend (#3242)
models                        2025-05-29 11:38:44 +02:00   [gaudi] Fix the Llama-4-Maverick-17B-128E crash issue (#3246)
pb                            2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
utils                         2025-05-28 14:54:20 +02:00   fp8 compressed tensors w8a8 support for Gaudi backend (#3242)
__init__.py                   2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
cache.py                      2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
cli.py                        2025-05-28 14:54:20 +02:00   fp8 compressed tensors w8a8 support for Gaudi backend (#3242)
habana_quantization_env.py    2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
interceptor.py                2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)
server.py                     2025-05-19 16:36:39 +02:00   Deepseek R1 for Gaudi backend (#3211)
tgi_service.py                2025-05-20 14:02:32 +02:00   Fix the crash in default ATTENTION path for Gaudi backend (#3235)
tracing.py                    2025-02-28 12:14:58 +01:00   Add Gaudi Backend (#3055)