text-generation-inference/server/text_generation_server/models
Commit 329f612e55 by Mohit Sharma, 2025-05-06 18:01:59 +02:00
Chunked Prefill VLM (#3188)

* add logic
* working
* add encoder cache free
* fixes
* fix idefics
* update pixel_values
* add improvements
* improve
* nit
* fix inputs_embeds
* optimizations
* add prometheus port
* rename vars
* disable chunking for qwen
* review comments
* remove port
* improve headdim
* remove kwargs and redundant args
* fix qwen2_5
* fix config image_token_id error
* fix test
* update paligemma
* fix paligemma text
* minor fix
* fix qwen test
File                             Last commit                                                                  Last updated
custom_modeling                  Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00
__init__.py                      Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00
bloom.py                         Refactor dead code - Removing all flash_xxx.py files. (#2166)               2024-07-05 10:29:56 +02:00
causal_lm.py                     Sync (most) server dependencies with Nix (#2782)                            2024-12-03 04:04:06 +01:00
flash_causal_lm.py               Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00
galactica.py                     feat: add ruff and resolve issue (#2262)                                    2024-07-26 10:29:09 -04:00
globals.py                       Put more wiggle room. (#3189)                                                2025-04-24 17:23:32 +02:00
idefics_causal_lm.py             feat: prefill chunking (#2600)                                               2024-10-16 12:49:33 +02:00
mamba.py                         Choosing input/total tokens automatically based on available VRAM? (#2673)  2024-10-28 04:59:49 +01:00
metadata_kernels.py              feat: add payload limit (#2726)                                              2024-11-21 18:20:15 +00:00
mllama_causal_lm.py              Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00
model.py                         Bug Fix: Sliding Window Attention (#3112)                                    2025-03-18 10:37:33 +01:00
seq2seq_lm.py                    feat: prefill chunking (#2600)                                               2024-10-16 12:49:33 +02:00
transformers_flash_causal_lm.py  transformers flash llm/vlm enabling in ipex (#3152)                         2025-04-15 11:08:01 +02:00
transformers_flash_vlm.py        Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00
types.py                         feat: prefill chunking (#2600)                                               2024-10-16 12:49:33 +02:00
vlm_causal_lm.py                 Chunked Prefill VLM (#3188)                                                  2025-05-06 18:01:59 +02:00