text-generation-inference/server/text_generation_server/layers/attention
Wang, Yi 938a7f3c3a hotfix: fix regression of attention api change in intel platform (#2439)
fix regression caused by the attention API change: ipex.varlen_attention does not yet support KV input in paged-cache format.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2024-09-25 06:13:36 +00:00
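The regression stems from the attention API's move to a paged KV cache, which ipex.varlen_attention cannot consume directly, so the KV input has to be presented in a contiguous layout instead. The sketch below is a minimal illustration of that idea, not the repository's actual code: it gathers paged KV blocks back into contiguous tensors before calling a varlen attention kernel. The tensor shapes and the `gather_paged_kv` helper name are assumptions for illustration only.

```python
import torch

def gather_paged_kv(kv_cache: torch.Tensor,
                    block_tables: torch.Tensor,
                    seq_lens: torch.Tensor,
                    block_size: int) -> torch.Tensor:
    """Hypothetical helper: flatten a paged KV cache back into the
    contiguous [total_tokens, num_heads, head_dim] layout that a
    varlen attention kernel expects.

    kv_cache:     [num_blocks, block_size, num_heads, head_dim]
    block_tables: [num_seqs, max_blocks_per_seq] block indices per sequence
    seq_lens:     [num_seqs] number of valid tokens per sequence
    """
    chunks = []
    for seq_id in range(block_tables.shape[0]):
        seq_len = int(seq_lens[seq_id])
        n_blocks = (seq_len + block_size - 1) // block_size
        # Select this sequence's blocks, then drop the padding tokens
        # in the last (partially filled) block.
        blocks = kv_cache[block_tables[seq_id, :n_blocks]]          # [n_blocks, block_size, H, D]
        tokens = blocks.reshape(-1, *kv_cache.shape[2:])[:seq_len]  # [seq_len, H, D]
        chunks.append(tokens)
    return torch.cat(chunks, dim=0)  # [total_tokens, num_heads, head_dim]
```

In a setup like this, the contiguous key and value tensors produced by the gather would then be passed to the Intel varlen attention call in ipex.py, while CUDA and ROCm backends keep consuming the paged cache directly.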
__init__.py Prefix caching (#2402) 2024-09-25 06:10:59 +00:00
common.py Lots of improvements (Still 2 allocators) (#2449) 2024-09-25 06:13:11 +00:00
cuda.py Lots of improvements (Still 2 allocators) (#2449) 2024-09-25 06:13:11 +00:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
flashinfer.py Prefix caching (#2402) 2024-09-25 06:10:59 +00:00
ipex.py hotfix: fix regression of attention api change in intel platform (#2439) 2024-09-25 06:13:36 +00:00
rocm.py Using an enum for flash backends (paged/flashdecoding/flashinfer) (#2385) 2024-09-25 06:04:51 +00:00
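The commit referenced for rocm.py introduces an enum for selecting the flash attention backend (paged / flashdecoding / flashinfer). A minimal sketch of such a selector is shown below, assuming an environment-variable switch; only the three backend names come from the commit title, the class and function names are hypothetical.

```python
from enum import Enum, auto
import os

class FlashBackend(Enum):
    # Backend variants named in the commit title; everything else is assumed.
    PAGED = auto()
    FLASHDECODING = auto()
    FLASHINFER = auto()

def select_backend() -> FlashBackend:
    """Hypothetical helper: pick the flash attention backend from an
    environment variable, defaulting to the paged KV cache backend."""
    name = os.environ.get("ATTENTION_BACKEND", "paged").upper()
    try:
        return FlashBackend[name]
    except KeyError:
        return FlashBackend.PAGED
```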