text-generation-inference/server/text_generation_server/models/custom_modeling
Nicolas Patry 635dde8af9 Prefix caching (#2402)
* Prefix caching WIP

* Fixing prefix attention.

* Fixing flashinfer import.

* Fixing black formatting.

* Fixing medusa (outputs still wrong, but functional).

* Only the medusa values are wrong now.

* Fixing medusa without prefix caching.

* Fixing prefix caching.

* Medusa requires reshaping.

* Removing the logs.

* Remove router.nix

* Fixup:

- Remove logs
- Disable VLMs (they do not work)
- Disable prefix caching when the user requests prefill logprobs (see the sketch after this list).

* Update flake.lock
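
A minimal sketch of that last fixup, assuming hypothetical names (prefill_logprobs, prefix_caching_enabled — neither is TGI's actual API). The rationale: prefix caching skips re-running cached prompt tokens through the model, so their prefill logprobs could never be computed, and the server must fall back to a full prefill.

    # Illustrative guard only; not code from text-generation-inference.
    def resolve_prefix_caching(prefill_logprobs: bool, prefix_caching_enabled: bool) -> bool:
        # Cached prefix tokens are never re-run through the model, so
        # their prefill logprobs cannot be produced; disable the cache.
        if prefill_logprobs:
            return False
        return prefix_caching_enabled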

---------

Co-authored-by: Daniël de Kok <me@danieldk.eu>
2024-09-25 06:10:59 +00:00
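
For context on the PR's subject, a conceptual sketch of prefix caching under assumed names (BLOCK_SIZE, block_keys, matched_prefix_len — none of these come from this repo): KV-cache blocks for prompt chunks are keyed by a hash of all tokens up to that block, so a new request can reuse the blocks of its longest already-cached prefix instead of recomputing attention over it.

    # Conceptual sketch only; not text-generation-inference code.
    import hashlib

    BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative value)

    def block_keys(token_ids: list[int]) -> list[str]:
        """One key per full block; each key covers all tokens up to that block."""
        keys, h = [], hashlib.sha256()
        for i, tok in enumerate(token_ids):
            h.update(tok.to_bytes(4, "little"))
            if (i + 1) % BLOCK_SIZE == 0:
                keys.append(h.hexdigest())
        return keys

    def matched_prefix_len(token_ids: list[int], cache: dict[str, object]) -> int:
        """Number of prompt tokens whose KV blocks are already cached."""
        n = 0
        for key in block_keys(token_ids):
            if key not in cache:
                break
            n += BLOCK_SIZE
        return n

Because each key hashes the entire token sequence up to its block, a match guarantees the whole prefix is identical, not just that one chunk.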
__init__.py
bloom_modeling.py
clip.py
flash_cohere_modeling.py
flash_dbrx_modeling.py
flash_deepseek_v2_modeling.py
flash_gemma2_modeling.py
flash_gemma_modeling.py
flash_gpt2_modeling.py
flash_gptj_modeling.py
flash_llama_modeling.py
flash_mistral_modeling.py
flash_mixtral_modeling.py
flash_neox_modeling.py
flash_pali_gemma_modeling.py
flash_phi_modeling.py
flash_qwen2_modeling.py
flash_rw_modeling.py
flash_santacoder_modeling.py
flash_starcoder2_modeling.py
idefics2.py
idefics_config.py
idefics_image_processing.py
idefics_modeling.py
idefics_perceiver.py
idefics_processing.py
idefics_vision.py
llava_next.py
mamba_modeling.py
mpt_modeling.py
neox_modeling.py
opt_modeling.py
phi_modeling.py
siglip.py
t5_modeling.py
vlm.py