text-generation-inference/server/text_generation_server/layers/attention
drbh 688321bcc4 fix: attempt forward on flash attn2 to check hardware support (#2335)
* fix: attempt forward on flash attn2 to check hardware support

* fix: warn window_size_left when using flash attn 1

* fix: prefer version check over test op and avoid window_size_left if not flash attn2

* fix: improve conditional and error message

* fix: update sliding window conditional

* fix: simplify changes and revert model changes

* fix: avoid changing conditional

* fix: typo tweak
2024-09-25 05:55:39 +00:00
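
The approach named in the commit above, probing FlashAttention v2 hardware support by attempting a forward pass and falling back when it fails, can be sketched roughly as below. This is a hedged illustration, not the actual cuda.py code: the function names, the dummy tensor shape, and the compute-capability threshold are assumptions, not taken from the repository.

    import torch

    def supports_flash_attn_v2() -> bool:
        """Probe FlashAttention v2 support on the current GPU (illustrative sketch)."""
        try:
            from flash_attn import flash_attn_func  # flash-attn >= 2.x
        except ImportError:
            return False
        try:
            # Attempt a tiny forward pass: shape (batch, seqlen, nheads, headdim),
            # fp16 on the GPU. An unsupported card raises RuntimeError here.
            q = torch.zeros(1, 1, 1, 64, dtype=torch.float16, device="cuda")
            flash_attn_func(q, q, q)
            return True
        except RuntimeError:
            return False

    def supports_flash_attn_v2_by_capability() -> bool:
        """Alternative matching the follow-up fix, which prefers a version/hardware
        check over a test op: FlashAttention v2 generally requires Ampere (sm80)
        or newer, so checking the compute capability avoids running a probe kernel."""
        major, _minor = torch.cuda.get_device_capability()
        return major >= 8

A forward-pass probe catches any configuration the kernel itself rejects, at the cost of touching the GPU at import time; the later commits in this squash lean toward the explicit capability/version check and only warn about window_size_left (sliding-window attention) when FlashAttention 1 is in use.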
__init__.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
common.py Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-09-24 03:58:13 +00:00
cuda.py fix: attempt forward on flash attn2 to check hardware support (#2335) 2024-09-25 05:55:39 +00:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
ipex.py Unify attention output handling (#2343) 2024-09-25 05:55:39 +00:00
rocm.py Unify attention output handling (#2343) 2024-09-25 05:55:39 +00:00