text-generation-inference/server/text_generation_server/layers/attention
Latest commit a6a0c97ed9 by OlivierDehaene
feat: prefill chunking (#2600)
* wip

* rollback

* refactor to use prefix/postfix naming + fix all_input_ids_tensor

* maybe patching vlms?

* fix filter and concat

* wip, no filter, no concat

* current

* add prepare_for_prefill

* working

* load tested

* re-create slots

* re-create slots

* fix slot_filtering_indices

* feedback loop

* remove log

* fix benchmarker

* fix vlm and seq2seq

* rename to cache and input lengths

* fix prefill logprobs

* fix launcher

* fix logprobs?

* idk at this point

* max input length

* omfg

* remove debugging lines

* fix tests

* fix mllama

* fix cargo tests

* remove support chunking for paged

* Fix non-blocked attentions

* Fix dtype for AMD and IPEX targets.

* lint fix.

* rename

* Fix prefix_caching variable, remove defaults in server (they were confusing a lot of the time).

* Add simple resolution when the user specifies ATTENTION=paged.

* Put back non-default simple tests.

* Fix env name

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2024-10-16 12:49:33 +02:00
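
The commit above implements prefill chunking. A minimal sketch of the idea, assuming nothing about TGI's internals (chunk_prefill, cache_length, and chunk_size are hypothetical names, not TGI's API): instead of running one prefill pass over the entire prompt, the prompt is split into fixed-size chunks, and each forward pass extends the KV cache by at most chunk_size tokens. The cache_length/input_length split mirrors the "rename to cache and input lengths" commit above.

```python
# Minimal sketch of prefill chunking (hypothetical names, not TGI's API).
from collections.abc import Iterator


def chunk_prefill(input_ids: list[int], chunk_size: int) -> Iterator[tuple[int, list[int]]]:
    """Yield (cache_length, chunk) pairs for one prompt.

    cache_length is the number of prompt tokens already written to the
    KV cache before this chunk's forward pass runs.
    """
    for start in range(0, len(input_ids), chunk_size):
        yield start, input_ids[start : start + chunk_size]


prompt = list(range(10))  # a 10-token prompt
for cache_length, chunk in chunk_prefill(prompt, chunk_size=4):
    # Each pass attends over cache_length cached tokens + len(chunk) new ones,
    # so no single request can monopolize a batch with a huge prefill.
    print(f"cache_length={cache_length}, input_length={len(chunk)}")
```

With chunking, per-token prefill logprobs have to be accumulated across chunks rather than read from a single pass, which is presumably what the "fix prefill logprobs" commit addresses.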
__init__.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
common.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
cuda.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
flash_attn_triton.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
flashinfer.py flashinfer: pass window size and dtype (#2574) 2024-09-28 18:41:41 +02:00
ipex.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
kv_cache.py Upgrade minor rust version (Fixes rust build compilation cache) (#2617) 2024-10-08 09:42:50 +02:00
rocm.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
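
The __init__.py entry references basic FP8 KV cache support (#2603). A minimal sketch of the general technique, assuming PyTorch >= 2.1 and per-tensor scaling (quantize_kv and dequantize_kv are hypothetical names, not TGI's implementation):

```python
# Hypothetical per-tensor FP8 KV cache quantization (requires PyTorch >= 2.1).
import torch

FP8 = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8).max  # 448.0 for e4m3fn


def quantize_kv(kv: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Scale a key/value tensor into FP8 range and cast; return (fp8, scale)."""
    scale = kv.abs().amax().float().clamp(min=1e-12) / FP8_MAX
    return (kv / scale).to(FP8), scale


def dequantize_kv(kv_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate fp16 tensor before the attention kernel runs."""
    return kv_fp8.to(torch.float16) * scale


keys = torch.randn(1, 8, 128, 64, dtype=torch.float16)  # [batch, heads, seq, head_dim]
k_fp8, k_scale = quantize_kv(keys)
print(k_fp8.dtype, dequantize_kv(k_fp8, k_scale).dtype)  # torch.float8_e4m3fn torch.float16
```

Halving the bytes per cached element roughly doubles how many tokens fit in the same GPU memory, at the cost of some quantization error inside attention.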