text-generation-inference/server/text_generation_server/layers/attention
Latest commit: Mohit Sharma, 88e2997b9c, "style", 2024-09-06 12:23:18 +00:00
__init__.py            feat: add ruff and resolve issue (#2262)                                                       2024-07-26 10:29:09 -04:00
common.py              [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940)  2024-07-01 23:28:00 +02:00
cuda.py                added custom PA                                                                                2024-09-04 05:46:28 +00:00
flash_attn_triton.py   feat: add ruff and resolve issue (#2262)                                                       2024-07-26 10:29:09 -04:00
ipex.py                added custom PA                                                                                2024-09-04 05:46:28 +00:00
rocm.py                style                                                                                          2024-09-06 12:23:18 +00:00