text-generation-inference/backends/gaudi/server/text_generation_server
Latest commit: 1e56e5fe5c — [gaudi] HuggingFaceM4/idefics2-8b issue fix (2025-06-12 22:15:33 -07:00)

`batch.prefill_cache_indices` is now reset in `generate_token` instead of `forward`, so that `position_id` can be updated correctly.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
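The ordering issue the commit describes can be illustrated with a minimal sketch. The class and field names (`Batch`, `prefill_cache_indices`, `position_ids`) are taken from the commit message, but the logic below is a hypothetical simplification, not TGI's actual implementation: if the prefill indices are cleared inside `forward`, the caller can no longer use them to advance the positions; clearing them in `generate_token`, after the positions are updated, avoids that.

```python
# Hypothetical sketch of the fix: reset prefill state in generate_token,
# after position_ids have been advanced, not inside forward.

class Batch:
    def __init__(self, prefill_cache_indices):
        self.prefill_cache_indices = prefill_cache_indices
        self.position_ids = [0] * len(prefill_cache_indices)

def forward(batch):
    # The model consumes the prefill indices but must NOT reset them
    # here; the caller still needs them after the forward pass.
    return list(batch.prefill_cache_indices)

def generate_token(batch):
    logits = forward(batch)
    # position_ids are advanced while prefill_cache_indices is still
    # valid (here: one step per prefilled sequence)...
    batch.position_ids = [
        p + 1 for p, _ in zip(batch.position_ids, batch.prefill_cache_indices)
    ]
    # ...and only then is the prefill state reset, once per generated token.
    batch.prefill_cache_indices = None
    return logits

batch = Batch(prefill_cache_indices=[0, 1, 2])
generate_token(batch)
```

Had `forward` set `prefill_cache_indices = None` itself, the position update in `generate_token` would fail, which is the symptom the commit fixes.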
Name             Last commit                                                         Date
adapters/        Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00
layers/          [gaudi] Move the _update_cos_sin_cache into get_cos_sin (#3254)     2025-06-12 22:31:11 +02:00
models/          [gaudi] HuggingFaceM4/idefics2-8b issue fix                         2025-06-12 22:15:33 -07:00
pb/              Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00
utils/           fp8 compressed tensors w8a8 support for Gaudi backend (#3242)       2025-05-28 14:54:20 +02:00
__init__.py      Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00
cache.py         Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00
cli.py           [Gaudi] Remove optimum-habana (#3261)                               2025-06-12 22:35:36 +02:00
interceptor.py   Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00
server.py        Deepseek R1 for Gaudi backend (#3211)                               2025-05-19 16:36:39 +02:00
tgi_service.py   Fix the crash in default ATTENTION path for Gaudi backend (#3235)   2025-05-20 14:02:32 +02:00
tracing.py       Add Gaudi Backend (#3055)                                           2025-02-28 12:14:58 +01:00