text-generation-inference/server/text_generation_server/models
Daniël de Kok f586cc7f0c Add support for prefix caching to the v3 router (#2392)
This change adds support for prefix caching to the v3 router. It is
split out from the backend support to ease reviewing.

For now, prefix caching is only enabled with `USE_PREFIX_CACHING=1`;
in this case, the router switches to `RadixAllocator`. This
allocator uses a radix trie to keep track of previously seen
prefills. If a new prefill is a prefix of a previously seen
prefill, the router sends a request with `prefix_len>0`, which
the backend can use to decide to reuse KV blocks from the
cache rather than recomputing them.

Even though backend support is not added in this PR, the backend
still works with prefix caching enabled; the prefix lengths are
simply ignored and not used.
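As an illustration of the idea (not the actual `RadixAllocator`, which is part of the Rust router and works at KV-block granularity), the following sketch shows a radix trie over token-id sequences: `insert` records a seen prefill, and `longest_prefix` returns the length of the longest previously seen prefix, i.e. the value a router could report as `prefix_len`. All names here are hypothetical.

```python
# Hedged sketch: a radix trie keyed on token ids. Each edge stores a run
# of tokens; edges are split when a new sequence diverges mid-edge.

class RadixNode:
    def __init__(self):
        # first token of the edge -> (edge token list, child node)
        self.children = {}


class RadixTrie:
    def __init__(self):
        self.root = RadixNode()

    def insert(self, tokens):
        """Record a prefill (a list of token ids) in the trie."""
        node, i = self.root, 0
        while i < len(tokens):
            key = tokens[i]
            if key not in node.children:
                node.children[key] = (tokens[i:], RadixNode())
                return
            edge, child = node.children[key]
            # Length of the common run between the edge label and the rest.
            j = 0
            while j < len(edge) and i + j < len(tokens) and edge[j] == tokens[i + j]:
                j += 1
            if j < len(edge):
                # Diverged mid-edge: split the edge at position j.
                mid = RadixNode()
                mid.children[edge[j]] = (edge[j:], child)
                node.children[key] = (edge[:j], mid)
                child = mid
            i += j
            node = child

    def longest_prefix(self, tokens):
        """Number of leading tokens shared with some previously seen prefill."""
        node, i = self.root, 0
        while i < len(tokens):
            entry = node.children.get(tokens[i])
            if entry is None:
                break
            edge, child = entry
            j = 0
            while j < len(edge) and i + j < len(tokens) and edge[j] == tokens[i + j]:
                j += 1
            i += j
            if j < len(edge):
                break
            node = child
        return i
```

With this sketch, after inserting the prefill `[1, 2, 3, 4]`, a new prefill `[1, 2, 3, 9]` would yield `prefix_len == 3`, so a backend could reuse the cached KV entries for the first three tokens. The real allocator additionally has to round the match down to block boundaries, since KV reuse happens per block.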
2024-09-25 06:05:08 +00:00
custom_modeling fix: prefer hidden_activation over hidden_act in gemma2 (#2381) 2024-09-25 05:55:39 +00:00
__init__.py Update documentation for Supported models (#2386) 2024-09-25 06:04:51 +00:00
bloom.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00
causal_lm.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
flash_causal_lm.py Using an enum for flash backens (paged/flashdecoding/flashinfer) (#2385) 2024-09-25 06:04:51 +00:00
galactica.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
globals.py Add support for prefix caching to the v3 router (#2392) 2024-09-25 06:05:08 +00:00
idefics_causal_lm.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
idefics.py enable HuggingFaceM4/idefics-9b in intel gpu (#2338) 2024-09-25 05:55:39 +00:00
mamba.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
model.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
pali_gemma.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
seq2seq_lm.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
types.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
vlm_causal_lm.py fix crash in multi-modal (#2245) 2024-09-25 05:39:58 +00:00