Directory: text-generation-inference/server/text_generation_server/layers/attention
Latest commit: 2024-06-24 08:15:36 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986) | 2024-05-31 17:57:01 +02:00 |
| `cuda.py` | rebase and update | 2024-06-24 08:15:36 +00:00 |
| `flash_attn_triton.py` | Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986) | 2024-05-31 17:57:01 +02:00 |
| `rocm.py` | rebase and update | 2024-06-24 08:15:36 +00:00 |
| `xpu.py` | rebase and update | 2024-06-24 08:15:36 +00:00 |
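
The layout follows the refactor referenced in #1986: one module per hardware backend (`cuda.py`, `rocm.py`, `xpu.py`), with `__init__.py` choosing which one to expose. Below is a minimal sketch of that dispatch pattern, not the repository's actual code: the `_detect_system()` helper, the environment-variable fallback, and the exported names (`attention`, `paged_attention`, `reshape_and_cache`) are illustrative assumptions.

```python
# Sketch of a per-hardware __init__.py: pick the backend module once at
# import time so callers import attention ops without caring which
# hardware they run on. Names and detection logic are assumptions.
import os


def _detect_system() -> str:
    # Hypothetical detection; real code would likely inspect torch or the
    # installed vendor libraries instead of reading an environment variable.
    return os.environ.get("SYSTEM", "cuda")


SYSTEM = _detect_system()

if SYSTEM == "cuda":
    from .cuda import attention, paged_attention, reshape_and_cache
elif SYSTEM == "rocm":
    from .rocm import attention, paged_attention, reshape_and_cache
elif SYSTEM == "xpu":
    from .xpu import attention, paged_attention, reshape_and_cache
else:
    raise ImportError(f"Unsupported hardware backend: {SYSTEM}")

__all__ = ["attention", "paged_attention", "reshape_and_cache"]
```

The benefit of this shape is that hardware differences stay confined to one file per backend, while model code depends only on the package-level names.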