text-generation-inference/server/text_generation_server/layers/attention
Daniël de Kok 653193a942 Improve support for GPUs with capability < 8 (#2575)
* Improve support for GPUs with capability < 8

- For models that cannot use flashinfer, use flash-attn v1 + paged
  attention on GPUs with a compute capability older than 8 (a sketch of
  this fallback follows the list).
- Disable prefix caching when using paged attention.
- When using flash-attn v1, pass the key/value tensors rather than the
  KV cache, since v1 cannot use block tables.
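A minimal sketch of the fallback selection described above, assuming a CUDA device is present; the function name and return shape are illustrative, not the server's actual API.

```python
# Illustrative only: the real dispatch in text_generation_server's attention
# layers is spread across the files listed below and uses different names.
import torch


def select_attention_backend() -> tuple:
    """Return (backend, prefix_caching_enabled) for the current GPU."""
    major, _minor = torch.cuda.get_device_capability()
    if major >= 8:
        # Ampere or newer: flashinfer is usable and prefix caching stays on.
        return "flashinfer", True
    # Compute capability < 8: fall back to flash-attn v1 + paged attention.
    # Prefix caching is disabled on this path, and the v1 kernel receives the
    # per-request key/value tensors directly because it cannot address the
    # paged KV cache through block tables.
    return "flash-attn-v1+paged", False


if __name__ == "__main__":
    backend, prefix_caching = select_attention_backend()
    print(f"backend={backend}, prefix_caching={prefix_caching}")
```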

* nix: add flash-attn-v1 to the server environment

* Move disabling prefix caching into the block of exceptions

* Capability as `usize`s
2024-10-25 09:01:04 +00:00
__init__.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
common.py Lots of improvements (Still 2 allocators) (#2449) 2024-09-25 06:13:11 +00:00
cuda.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
flashinfer.py More tensor cores. (#2558) 2024-10-25 09:01:04 +00:00
ipex.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
rocm.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00