text-generation-inference/server/text_generation_server/layers/attention
Nicolas Patry b70ae0969f
Prefix caching (#2402)
* Prefix caching WIP

* Fixing prefix attention.

* Fixing flashinfer import.

* Fixing black.

* Fixing medusa (still wrong outputs, but functional).

* Just medusa values now.

* Fixing medusa without prefix caching.

* Fixing prefix caching.

* Medusa requires reshaping.

* Removing the logs.

* Remove router.nix

* Fixup:

- Remove logs
- Disable VLMs (they do not work)
- Disable prefix caching when user wants prefill logprobs.

* Update flake.lock

---------

Co-authored-by: Daniël de Kok <me@danieldk.eu>
2024-08-20 11:15:30 +02:00
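One item in the fixup above disables prefix caching whenever the client requests prefill logprobs, since a cached prefix is not re-scored during prefill. A minimal sketch of that kind of guard, using hypothetical names (GenerateRequest, prefill_logprobs, resolve_prefix_caching) rather than the repository's actual API:

```python
from dataclasses import dataclass


@dataclass
class GenerateRequest:
    # Hypothetical request shape, for illustration only.
    inputs: str
    prefill_logprobs: bool = False


def resolve_prefix_caching(request: GenerateRequest, prefix_caching_enabled: bool) -> bool:
    # Assumed behavior: prefix caching skips recomputing cached prefix tokens,
    # so their prefill logprobs would be unavailable; fall back to a full
    # prefill whenever the client asks for them.
    if request.prefill_logprobs:
        return False
    return prefix_caching_enabled
```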
__init__.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
common.py Using an enum for flash backends (paged/flashdecoding/flashinfer) (#2385) 2024-08-09 16:41:17 +02:00
cuda.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flashinfer.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
ipex.py Pr 2337 ci branch (#2379) 2024-08-08 12:30:29 -04:00
rocm.py Using an enum for flash backends (paged/flashdecoding/flashinfer) (#2385) 2024-08-09 16:41:17 +02:00
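The commit listed for common.py and rocm.py replaces ad-hoc backend flags with an enum covering the paged / flashdecoding / flashinfer attention backends. A rough sketch of what enum-based backend selection could look like; the names AttentionBackend, select_backend, and the ATTENTION_BACKEND environment variable are illustrative assumptions, not the module's actual API:

```python
import os
from enum import Enum
from typing import Optional


class AttentionBackend(Enum):
    # The three backends named in the commit title.
    PAGED = "paged"
    FLASHDECODING = "flashdecoding"
    FLASHINFER = "flashinfer"


def select_backend(value: Optional[str] = None) -> AttentionBackend:
    # Hypothetical helper: parse a backend name (e.g. from an env var) into
    # the enum, defaulting to paged attention when unset or unrecognized.
    raw = (value or os.getenv("ATTENTION_BACKEND", "paged")).strip().lower()
    try:
        return AttentionBackend(raw)
    except ValueError:
        return AttentionBackend.PAGED
```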