text-generation-inference/server/text_generation_server/layers/attention
Latest commit: 630f198624 by Daniël de Kok, 2025-01-17 18:18:02 +01:00
flashinfer: switch to plan API (#2904)
This change doesn't switch `forward` to `run` yet, since it requires that
we have access to the softmax scale and the logit softcap outside the model.
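For context on what the plan API refers to: the sketch below shows the general shape of flashinfer's paged-KV prefill wrapper, where `plan()` (the successor to `begin_forward()`) precomputes per-batch metadata and `run()` (the successor to the deprecated `forward()`) executes attention. This is an illustrative sketch based on flashinfer's public interface, not TGI's code; the shapes, variable names, and the exact `plan()` keyword arguments used here are assumptions.

```python
# Minimal sketch (not TGI code): flashinfer's plan/run interface for batched
# prefill over a paged KV cache. Requires a CUDA GPU with flashinfer installed;
# all shapes and values below are made up for illustration.
import torch
import flashinfer

device = "cuda"
num_qo_heads, num_kv_heads, head_dim, page_size = 8, 8, 64, 16

# One sequence with 4 query tokens and a single KV page holding 4 entries.
qo_indptr = torch.tensor([0, 4], dtype=torch.int32, device=device)
paged_kv_indptr = torch.tensor([0, 1], dtype=torch.int32, device=device)
paged_kv_indices = torch.tensor([0], dtype=torch.int32, device=device)
paged_kv_last_page_len = torch.tensor([4], dtype=torch.int32, device=device)

q = torch.randn(4, num_qo_heads, head_dim, dtype=torch.float16, device=device)
# NHD paged cache layout: (num_pages, 2 [K/V], page_size, num_kv_heads, head_dim).
kv_cache = torch.randn(1, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device=device)

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device=device)
wrapper = flashinfer.BatchPrefillWithPagedKVCacheWrapper(workspace, "NHD")

# plan() precomputes the scheduling metadata for this batch. Parameters such
# as sm_scale (and logits_soft_cap, omitted here) are fixed at plan time rather
# than per call, which is why switching the per-call `forward` to `run` needs
# the softmax scale and logit softcap to be known outside the model.
wrapper.plan(
    qo_indptr,
    paged_kv_indptr,
    paged_kv_indices,
    paged_kv_last_page_len,
    num_qo_heads,
    num_kv_heads,
    head_dim,
    page_size,
    causal=True,
    sm_scale=head_dim ** -0.5,
    q_data_type=torch.float16,
)

out = wrapper.run(q, kv_cache)  # (4, num_qo_heads, head_dim)
print(out.shape)
```

As I understand the deprecated path, `forward()` accepted the softmax scale and logit softcap per call inside the model, while `run()` relies on the values given to `plan()` up front; that appears to be the constraint the commit message notes.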
__init__.py           Add support for FP8 KV cache scales (#2628)  2024-10-24 16:36:18 +02:00
common.py             feat: prefill chunking (#2600)  2024-10-16 12:49:33 +02:00
cuda.py               flashinfer: switch to plan API (#2904)  2025-01-17 18:18:02 +01:00
flash_attn_triton.py  feat: prefill chunking (#2600)  2024-10-16 12:49:33 +02:00
flashinfer.py         flashinfer: switch to plan API (#2904)  2025-01-17 18:18:02 +01:00
ipex.py               Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815)  2025-01-17 12:04:57 +01:00
kv_cache.py           Add fp8 kv cache for ROCm (#2856)  2025-01-17 18:43:29 +05:30
rocm.py               Add fp8 kv cache for ROCm (#2856)  2025-01-17 18:43:29 +05:30