text-generation-inference/server/text_generation_server/layers/attention
Latest commit 880ab9c2f3 by Mohit Sharma, 2025-01-13 11:12:35 +01:00
Add Flash decoding kernel ROCm (#2855)
* (vllm) updated vllm rocm kernels
* revert silu
* update partition size
* remove grouped_topk
* (nit) remove log
* add flash decoding
File                  Last commit message                           Last commit date
__init__.py           Add support for FP8 KV cache scales (#2628)   2024-10-24 16:36:18 +02:00
common.py             feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
cuda.py               Basic flashinfer 0.2 support (#2862)          2025-01-09 16:25:00 +01:00
flash_attn_triton.py  feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
flashinfer.py         Basic flashinfer 0.2 support (#2862)          2025-01-09 16:25:00 +01:00
ipex.py               Add support for FP8 KV cache scales (#2628)   2024-10-24 16:36:18 +02:00
kv_cache.py           Update vllm kernels for ROCM (#2826)          2024-12-18 12:44:42 +01:00
rocm.py               Add Flash decoding kernel ROCm (#2855)        2025-01-13 11:12:35 +01:00