text-generation-inference/backends/gaudi/server/text_generation_server/layers/attention
Latest commit: c55a8caea2 remove torch.where to fix incorrect output in hpu graph model
Author: Wang, Yi A (Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>)
Date: 2025-03-31 22:51:54 -07:00
File           Last commit message                                                          Date
__init__.py    clean cuda/rocm code in hpu backend, enable flat_hpu                         2025-03-14 01:25:31 -07:00
common.py      clean cuda/rocm code in hpu backend, enable flat_hpu                         2025-03-14 01:25:31 -07:00
hpu.py         remove block_tables and prefill_cache_indices which will lead to dynamic shape  2025-03-27 23:57:59 -07:00
kv_cache.py    remove torch.where to fix incorrect output in hpu graph model                2025-03-31 22:51:54 -07:00