text-generation-inference/server/text_generation_server/layers/attention
Latest commit db922eb77e by Daniël de Kok, 2025-01-27 11:42:36 +01:00
Update to attention-kernels 0.2.0 (#2950)
This version removes our patches/custom API, which makes it simpler to pull in changes from upstream. One such upstream change is that the FP8 KV cache can now be enabled for paged attention as well (see the sketch after the file listing below).
File                  Date                        Last commit message
__init__.py           2024-10-24 16:36:18 +02:00  Add support for FP8 KV cache scales (#2628)
common.py             2024-10-16 12:49:33 +02:00  feat: prefill chunking (#2600)
cuda.py               2025-01-27 11:42:36 +01:00  Update to attention-kernels 0.2.0 (#2950)
flash_attn_triton.py  2024-10-16 12:49:33 +02:00  feat: prefill chunking (#2600)
flashinfer.py         2025-01-17 18:18:02 +01:00  flashinfer: switch to plan API (#2904)
ipex.py               2025-01-17 12:04:57 +01:00  Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815)
kv_cache.py           2025-01-27 11:42:36 +01:00  Update to attention-kernels 0.2.0 (#2950)
rocm.py               2025-01-17 18:43:29 +05:30  Add fp8 kv cache for ROCm (#2856)
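For readers unfamiliar with the FP8 KV cache mentioned in the commit above, the sketch below illustrates the general idea: key/value blocks are stored in an 8-bit floating-point format together with a scale, and the scale is applied again when the cache is read back. This is a minimal, hypothetical illustration in plain PyTorch (assuming a build that provides torch.float8_e4m3fn); the helper names quantize_kv and dequantize_kv are invented for this example and are not the attention-kernels or text-generation-inference API.

```python
# Minimal, hypothetical sketch of an FP8 KV cache with a per-tensor scale.
# Not the attention-kernels or text-generation-inference API; it only shows
# the idea of storing K/V in float8 and applying a scale when reading back.
from typing import Optional

import torch

FP8_DTYPE = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8_DTYPE).max  # 448.0 for e4m3fn


def quantize_kv(kv: torch.Tensor, scale: Optional[torch.Tensor] = None):
    """Quantize a K or V block to FP8, returning the quantized block and its scale."""
    if scale is None:
        # Choose the scale so the largest magnitude maps onto the FP8 range.
        scale = kv.abs().amax().float().clamp(min=1e-12) / FP8_MAX
    kv_fp8 = (kv.float() / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8_DTYPE)
    return kv_fp8, scale


def dequantize_kv(kv_fp8: torch.Tensor, scale: torch.Tensor, dtype=torch.float16):
    """Recover an approximate K or V block from its FP8 representation."""
    return (kv_fp8.to(torch.float32) * scale).to(dtype)


if __name__ == "__main__":
    # One KV-cache block: (block_size, num_heads, head_dim); shape is hypothetical.
    k = torch.randn(16, 8, 64, dtype=torch.float16)
    k_fp8, k_scale = quantize_kv(k)
    k_approx = dequantize_kv(k_fp8, k_scale)
    print("max abs error:", (k - k_approx).abs().max().item())
```

In real paged-attention kernels the scale is typically applied inside the attention kernel itself rather than by materializing dequantized tensors; the sketch only shows the role the per-tensor scale plays.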