text-generation-inference/server/text_generation_server/layers/attention
Latest commit 885144166f by Wang, Yi, 2025-01-17 12:04:57 +01:00
Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815)

* flash decoding
* enable xpu flashdecoding
* set flashdecoding blocksize as 64
* enable flashdecoding, prefill chunking and prefix caching
* add flashdecoding-ipex

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
File                  Last commit                                                                                              Date
__init__.py           Add support for FP8 KV cache scales (#2628)                                                              2024-10-24 16:36:18 +02:00
common.py             feat: prefill chunking (#2600)                                                                           2024-10-16 12:49:33 +02:00
cuda.py               Basic flashinfer 0.2 support (#2862)                                                                     2025-01-09 16:25:00 +01:00
flash_attn_triton.py  feat: prefill chunking (#2600)                                                                           2024-10-16 12:49:33 +02:00
flashinfer.py         Basic flashinfer 0.2 support (#2862)                                                                     2025-01-09 16:25:00 +01:00
ipex.py               Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815)   2025-01-17 12:04:57 +01:00
kv_cache.py           Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815)   2025-01-17 12:04:57 +01:00
rocm.py               Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825)                      2025-01-15 11:38:58 +05:30
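Commit #2815 above pins the flash-decoding block size to 64. As a rough illustration of what that parameter means for a paged KV cache with prefix caching, here is a minimal, self-contained Python sketch; `ToyPagedKVCache`, `blocks_needed`, and the allocation logic are hypothetical stand-ins, not the actual implementation in kv_cache.py or ipex.py.

```python
from typing import Dict, List, Tuple

# NOTE: everything below is a hypothetical illustration, not code from
# text-generation-inference. BLOCK_SIZE matches the flash-decoding block
# size configured in commit #2815.
BLOCK_SIZE = 64


def blocks_needed(seq_len: int) -> int:
    """KV-cache blocks required to hold seq_len token positions."""
    return (seq_len + BLOCK_SIZE - 1) // BLOCK_SIZE


class ToyPagedKVCache:
    """Toy block allocator sketching prefix caching over 64-token blocks."""

    def __init__(self, num_blocks: int) -> None:
        self.free: List[int] = list(range(num_blocks))
        # A full block is reusable only when the *entire* token prefix up to
        # its end matches, since KV entries depend on all preceding tokens.
        self.prefix_index: Dict[Tuple[int, ...], int] = {}

    def allocate(self, tokens: List[int]) -> List[int]:
        """Return a block table for `tokens`, reusing cached prefix blocks."""
        table: List[int] = []
        for start in range(0, len(tokens), BLOCK_SIZE):
            end = start + BLOCK_SIZE
            prefix = tuple(tokens[:end])
            if end <= len(tokens) and prefix in self.prefix_index:
                table.append(self.prefix_index[prefix])  # prefix-cache hit
                continue
            block = self.free.pop()
            if end <= len(tokens):  # only index completely filled blocks
                self.prefix_index[prefix] = block
            table.append(block)
        return table


if __name__ == "__main__":
    cache = ToyPagedKVCache(num_blocks=16)
    prompt = list(range(130))               # 130 tokens -> 64 + 64 + 2
    assert blocks_needed(len(prompt)) == 3
    first = cache.allocate(prompt)
    second = cache.allocate(prompt)         # first two blocks are shared
    assert first[:2] == second[:2] and first[2] != second[2]
    print(first, second)
```

Keying reuse on the whole prefix (rather than on a block's own tokens) reflects why prefix caching only shares blocks between requests whose prompts agree up to that point; the trailing, partially filled block is never shared.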