text-generation-inference/server/text_generation_server/layers/attention
Daniël de Kok a9c7d2e3b6
Basic flashinfer 0.2 support (#2862)
* Basic flashinfer 0.2 support

This change does not use any of the new features yet, but makes
some small compatibility changes.

* Update to flashinfer 0.2.0.post1

* flashinfer: remove `contiguous` calls

* Fix flashinfer install

* flashinfer: fixup kv cache dtype

* Fix some annoying perturbations

* More output changes
2025-01-09 16:25:00 +01:00
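The PR diff itself is not shown here, so purely as a hedged illustration: the "small compatibility changes" the commit message mentions are typically shims around the flashinfer 0.2 API, which renamed the attention wrappers' begin_forward/forward methods to plan/run (the old names remain as deprecated aliases). A minimal sketch of such a shim, assuming flashinfer and a CUDA device are available, and not taken from the actual change:

    # Hypothetical compatibility shim, not the code from PR #2862:
    # resolve the flashinfer 0.2 method names (plan/run) with a fallback
    # to the 0.1.x names (begin_forward/forward) so call sites work
    # against either version.
    import torch
    import flashinfer

    # 128 MiB workspace buffer, as used in the flashinfer examples.
    workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
    decode_wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(workspace, "NHD")

    plan_fn = getattr(decode_wrapper, "plan", None) or decode_wrapper.begin_forward
    run_fn = getattr(decode_wrapper, "run", None) or decode_wrapper.forward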
__init__.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
common.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
cuda.py Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
flash_attn_triton.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
flashinfer.py Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
ipex.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
kv_cache.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
rocm.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00