text-generation-inference/server/text_generation_server/layers/attention
Name                   Last commit message                           Last commit date
__init__.py            Add support for FP8 KV cache scales (#2628)   2024-10-24 16:36:18 +02:00
common.py              feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
cuda.py                (fix) flashinfer                              2025-03-13 21:32:38 +00:00
flash_attn_triton.py   feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
flashinfer.py          (fix) sliding window attention                2025-03-13 19:43:00 +00:00
ipex.py                Add window_size_left param ipex rocm          2025-03-14 07:47:45 +00:00
kv_cache.py            Use kernels from the kernel hub (#2988)       2025-02-10 19:19:25 +01:00
rocm.py                Add window_size_left param ipex rocm          2025-03-14 07:47:45 +00:00