text-generation-inference/server/text_generation_server/layers/attention
Mohit Sharma a35fbdb925
Bug Fix: Sliding Window Attention (#3112)
* (fix) sliding window attention

* (fix) flashinfer

* (typo) collection link

* Add window_size_left param for ipex and rocm

* Update window size for rocm flash decoding

* fix: bump snapshots and improve exceed window test case

* feat: add tests for image types and remove alpha from png

* Upgrading `from_env` to get token from file when necessary + fix pali_gemma.

* fix: add pillow dependency and bump lock+requirements

* fix: bump org name in gemma3 test

* Fix qwen2.

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2025-03-18 10:37:33 +01:00
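For context on what the fix above concerns: sliding-window attention restricts each query to a bounded span of keys to its left instead of the full causal prefix. The sketch below is a minimal, illustrative PyTorch implementation of that masking, assuming `window_size_left` counts how many keys before the query position remain visible (mirroring the parameter name mentioned in the commit). It is not the TGI kernel code changed in #3112.

```python
# Illustrative sketch of sliding-window attention masking; NOT the
# flashinfer/ipex/rocm kernels this commit modifies.
import math
import torch

def sliding_window_attention(q, k, v, window_size_left):
    # q, k, v: [seq_len, num_heads, head_dim]
    seq_len = q.shape[0]
    scores = torch.einsum("qhd,khd->hqk", q, k) / math.sqrt(q.shape[-1])
    pos_q = torch.arange(seq_len).unsqueeze(1)  # query positions, column
    pos_k = torch.arange(seq_len).unsqueeze(0)  # key positions, row
    # Causal constraint: a query never attends to future keys.
    causal = pos_k <= pos_q
    # Window constraint: at most `window_size_left` keys to the left
    # of the query (plus the query's own position) stay visible.
    in_window = pos_k >= pos_q - window_size_left
    mask = causal & in_window                   # [seq_len, seq_len]
    scores = scores.masked_fill(~mask, float("-inf"))
    probs = torch.softmax(scores, dim=-1)
    return torch.einsum("hqk,khd->qhd", probs, v)
```

With `window_size_left=0` each token attends only to itself; with `window_size_left >= seq_len - 1` the mask degenerates to ordinary causal attention, which is why an off-by-one or an ignored window parameter (as in the backends patched here) silently produces full-attention behavior.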
File                   Last commit                                    Date
__init__.py            Add support for FP8 KV cache scales (#2628)    2024-10-24 16:36:18 +02:00
common.py              feat: prefill chunking (#2600)                 2024-10-16 12:49:33 +02:00
cuda.py                Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00
flash_attn_triton.py   feat: prefill chunking (#2600)                 2024-10-16 12:49:33 +02:00
flashinfer.py          Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00
ipex.py                Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00
kv_cache.py            Use kernels from the kernel hub (#2988)        2025-02-10 19:19:25 +01:00
rocm.py                Bug Fix: Sliding Window Attention (#3112)      2025-03-18 10:37:33 +01:00