text-generation-inference/server/text_generation_server/models/custom_modeling
Latest commit a6a0c97ed9 by OlivierDehaene: feat: prefill chunking (#2600)
* wip

* rollback

* refactor to use prefix/postfix naming + fix all_input_ids_tensor

* maybe patching vlms?

* fix filter and concat

* wip, no filter, no concat

* current

* add prepare_for_prefill

* working

* load tested

* re-create slots

* re-create slots

* fix slot_filtering_indices

* feedback loop

* remove log

* fix benchmarker

* fix vlm and seq2seq

* rename to cache and input lengths

* fix prefill logprobs

* fix launcher

* fix logprobs?

* idk at this point

* max input length

* omfg

* remove debugging lines

* fix tests

* fix mllama

* fix cargo tests

* remove chunking support for paged attention

* Fixing non-blocked attention

* Fixing dtype for AMD and Ipex targets.

* lint fix.

* rename

* Fix prefix_caching variable, remove defaults in server (confusing a lot of the time).

* Add simple resolution when the user specifies ATTENTION=paged.

* Put back non-default simple tests.

* Fix env name

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2024-10-16 12:49:33 +02:00
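
The prefill chunking work above splits a long prompt's prefill pass into bounded chunks, tracking a cache length (tokens already written to the KV cache) and an input length (tokens in the current chunk), per the "rename to cache and input lengths" step. Below is a minimal sketch of the idea, assuming a hypothetical model API (`new_kv_cache()`, `forward(chunk, kv_cache=..., cache_length=...)`); it is illustrative only, not the actual implementation from #2600.

```python
# A minimal sketch of prefill chunking, assuming a hypothetical model API.
# Illustrative only, not the implementation from #2600.

def chunked_prefill(model, input_ids, chunk_size):
    """Prefill `input_ids` in chunks of at most `chunk_size` tokens.

    `cache_length` counts tokens already written to the KV cache (the
    prefix); `input_length` is the size of the chunk currently being
    processed. Only the final chunk's logits matter, since decoding
    starts from the last prompt token.
    """
    kv_cache = model.new_kv_cache()  # hypothetical: allocate an empty cache
    cache_length = 0
    logits = None
    while cache_length < len(input_ids):
        chunk = input_ids[cache_length : cache_length + chunk_size]
        input_length = len(chunk)
        # The attention kernel attends over cache_length + input_length
        # positions: the cached prefix plus the current chunk.
        logits = model.forward(chunk, kv_cache=kv_cache, cache_length=cache_length)
        cache_length += input_length
    return logits  # sample the next token from logits[-1]
```

Bounding each forward pass this way lets the scheduler interleave chunks of a long prefill with decode steps for other requests, instead of blocking on one monolithic prefill.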
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
clip.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flash_cohere_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_dbrx_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_deepseek_v2_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_gemma2_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_gemma_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_gpt2_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_gptj_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_llama_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_mistral_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_mixtral_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_neox_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_pali_gemma_modeling.py Mllama flash version (#2585) 2024-10-02 11:22:13 +02:00
flash_phi_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_phi_moe_modeling.py feat: support phi3.5 moe (#2479) 2024-09-30 11:15:09 +02:00
flash_qwen2_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_rw_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_santacoder_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
flash_starcoder2_modeling.py Add basic FP8 KV cache support (#2603) 2024-10-04 17:51:48 +02:00
idefics2.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_modeling.py enable HuggingFaceM4/idefics-9b in intel gpu (#2338) 2024-08-01 11:08:36 +02:00
idefics_perceiver.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_vision.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
llava_next.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
mamba_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
mllama.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
mpt_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
neox_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
opt_modeling.py Fix the prefix for OPT model in opt_modeling.py #2370 (CI RUN) (#2371) 2024-08-07 23:14:02 -04:00
phi_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
siglip.py Fix: don't apply post layernorm in SiglipVisionTransformer (#2459) 2024-08-26 17:04:46 -04:00
t5_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
vlm.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
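
Most of the flash_*_modeling.py files above were last touched by "Add basic FP8 KV cache support (#2603)", i.e. storing keys and values in 8-bit floating point to roughly halve KV-cache memory versus fp16. Below is a minimal sketch of per-tensor FP8 quantization under that idea, assuming PyTorch >= 2.1 for `torch.float8_e4m3fn`; the scale choice and function names are hypothetical, not the code from #2603.

```python
# A minimal sketch of per-tensor FP8 quantization for a KV cache.
# Illustrative only, not the code from #2603. Assumes PyTorch >= 2.1
# for torch.float8_e4m3fn; the per-tensor scale choice is hypothetical.
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # ~448 for e4m3fn

def quantize_kv(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Scale into the representable FP8 range, then cast down.
    return (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)

def dequantize_kv(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Cast back up and undo the scale before running attention.
    return x_fp8.to(torch.float16) * scale

key = torch.randn(2, 8, 64, dtype=torch.float16)      # [heads, seq, head_dim]
scale = (key.abs().amax() / FP8_MAX).clamp(min=1e-6)  # per-tensor scale
key_fp8 = quantize_kv(key, scale)                     # half the bytes of fp16
key_restored = dequantize_kv(key_fp8, scale)
```

E4M3 trades exponent range for mantissa precision, which generally suits KV-cache values better than E5M2; finer-grained (per-head or per-channel) scaling is a possible refinement beyond this per-tensor sketch.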