text-generation-inference/server/text_generation_server/layers/attention
Commit 84ab88d843 by Daniël de Kok
Support flashinfer for Gemma3 prefill (#3167)
* launcher: ensure correct detection of Gemma 3 head size

* Support flashinfer for Gemma3 prefill

Gemma3 uses bidirectional attention for image tokens, and flashinfer
supports custom attention masks. Hook the mask up to flashinfer so that
we do not have to fall back to the slower SDPA implementation for
prefills that contain images (a mask-construction sketch follows this
commit log).

* Update Gemma3 test outputs

* Fix unused import
2025-04-17 18:07:41 +02:00
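To make the change above concrete, here is a minimal sketch, in plain PyTorch, of how such a prefill mask can be built: causal everywhere, but bidirectional within each contiguous run of image tokens. The function name `gemma3_prefill_mask` and the `image_token_mask` input are illustrative assumptions, not the identifiers used in this repository.

```python
# Minimal sketch (not the actual TGI code): build a prefill attention
# mask that is causal for text tokens but bidirectional inside each
# contiguous run of image tokens, as Gemma3 requires for images.
import torch

def gemma3_prefill_mask(image_token_mask: torch.Tensor) -> torch.Tensor:
    """Build a (seq_len, seq_len) boolean mask from a (seq_len,) boolean
    vector marking image tokens; mask[i, j] is True when query position
    i may attend to key position j."""
    seq_len = image_token_mask.shape[0]
    device = image_token_mask.device
    # Causal base: every token attends to itself and to earlier tokens.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool, device=device))
    # Label contiguous runs of equal image-ness with increasing span ids.
    change = torch.ones(seq_len, dtype=torch.bool, device=device)
    change[1:] = image_token_mask[1:] != image_token_mask[:-1]
    span_id = torch.cumsum(change, dim=0)
    # Within one image span, allow attention in both directions.
    same_span = span_id.unsqueeze(0) == span_id.unsqueeze(1)
    both_image = image_token_mask.unsqueeze(0) & image_token_mask.unsqueeze(1)
    return mask | (same_span & both_image)
```

A boolean mask of this shape would then be flattened and handed to flashinfer's custom-mask support when planning the prefill; the actual wiring in this repository lives in flashinfer.py in the listing below.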
File                  Last commit                                    Date
__init__.py           Add support for FP8 KV cache scales (#2628)    2024-10-24 16:36:18 +02:00
common.py             feat: prefill chunking (#2600)                 2024-10-16 12:49:33 +02:00
cuda.py               Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00
flash_attn_triton.py  feat: prefill chunking (#2600)                 2024-10-16 12:49:33 +02:00
flashinfer.py         Support flashinfer for Gemma3 prefill (#3167)  2025-04-17 18:07:41 +02:00
ipex.py               Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00
kv_cache.py           Use kernels from the kernel hub (#2988)        2025-02-10 19:19:25 +01:00
rocm.py               Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00