text-generation-inference/server/text_generation_server
Daniël de Kok c6071749db
Fix mask passed to flashinfer (#3324)
Custom masks are padded to the shape `[batch_size, max_len, max_len]`.
However, flashinfer expects an unpadded mask of the shape
`[sum(q_len[i] * k_len[i] for i in range(batch_size))]`.

This change unpads the custom mask (currently only used by Gemma 3)
to this shape (assuming q_len == k_len, since we only use the custom
mask during prefill).
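
A minimal sketch of the unpadding described above, not the actual TGI code: it flattens the valid `[q_len[i], k_len[i]]` region of each sequence's mask and concatenates them, under the prefill assumption that `q_len == k_len`. The function name and `seq_lens` parameter are illustrative.

```python
import torch

def unpad_custom_mask(mask: torch.Tensor, seq_lens: torch.Tensor) -> torch.Tensor:
    """Flatten a padded [batch_size, max_len, max_len] mask into the 1-D
    [sum(q_len[i] * k_len[i])] layout flashinfer expects (hypothetical helper,
    assuming q_len[i] == k_len[i] as during prefill)."""
    return torch.cat(
        [mask[i, :n, :n].reshape(-1) for i, n in enumerate(seq_lens.tolist())]
    )

# Example: two sequences of length 2 and 3, padded to max_len == 3.
mask = torch.zeros(2, 3, 3, dtype=torch.bool)
mask[0, :2, :2] = True  # valid region of sequence 0
mask[1, :3, :3] = True  # valid region of sequence 1
flat = unpad_custom_mask(mask, torch.tensor([2, 3]))
assert flat.numel() == 2 * 2 + 3 * 3  # 13 unpadded mask entries
```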
2025-09-08 13:47:03 -04:00
adapters xpu lora support (#3232) 2025-07-02 17:54:25 +02:00
layers Fix mask passed to flashinfer (#3324) 2025-09-08 13:47:03 -04:00
models chore: prepare version 3.3.5 (#3314) 2025-09-02 15:35:42 +02:00
pb chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
utils chore: prepare version 3.3.5 (#3314) 2025-09-02 15:35:42 +02:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py Fixing TRTLLM dockerfile. (#2922) 2025-01-20 11:13:46 +01:00
interceptor.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
server.py Chunked Prefill VLM (#3188) 2025-05-06 18:01:59 +02:00
tracing.py Add OTLP Service Name Environment Variable (#2076) 2024-06-25 09:33:01 +02:00