text-generation-inference/server/text_generation_server/layers/attention
__init__.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
common.py Fixing prefix caching for flashdecoding. 2024-08-27 20:06:11 +02:00
cuda.py Fixing prefix caching for flashdecoding. 2024-08-27 20:06:11 +02:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flashinfer.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
ipex.py Pr 2337 ci branch (#2379) 2024-08-08 12:30:29 -04:00
rocm.py Using an enum for flash backends (paged/flashdecoding/flashinfer) (#2385) 2024-08-09 16:41:17 +02:00
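The commit message above mentions an enum covering the three flash attention backends (paged, flashdecoding, flashinfer). As an illustration only, a minimal Python sketch of what such an enum and a string-to-backend resolver could look like is shown below; the class name, the env var, and the selection logic are assumptions for this example and are not the actual text-generation-inference implementation.

```python
# Illustrative sketch: an enum for the flash attention backends named in the
# commit above (paged / flashdecoding / flashinfer). Names and selection
# logic are hypothetical, not the library's real API.
import os
from enum import Enum


class AttentionBackend(Enum):
    PAGED = "paged"
    FLASHDECODING = "flashdecoding"
    FLASHINFER = "flashinfer"


def resolve_backend(value: str | None) -> AttentionBackend:
    """Map a user-supplied string to a backend, defaulting to paged attention."""
    if not value:
        return AttentionBackend.PAGED
    try:
        return AttentionBackend(value.lower())
    except ValueError:
        valid = [b.value for b in AttentionBackend]
        raise ValueError(f"Unknown attention backend {value!r}; expected one of {valid}")


if __name__ == "__main__":
    # "ATTENTION_BACKEND" is a hypothetical env var used only for this sketch.
    backend = resolve_backend(os.environ.get("ATTENTION_BACKEND"))
    print(f"Selected attention backend: {backend.value}")
```

Representing the backend choice as an enum (rather than free-form strings) lets the per-platform modules in this directory (cuda.py, rocm.py, ipex.py) branch on a closed set of values and fail loudly on typos.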