text-generation-inference/server/text_generation_server/models
Latest commit: 59ea38cbca by Daniël de Kok
Simplify the attention function (#2609)
* Simplify the `attention` function

- Use one definition rather than multiple.
- Add `key`/`value` arguments, so that we don't need the
  `PREFILL_IN_KVCACHE` constant.
- Make it kwargs-only (to avoid mixing up the various `Tensor` args).

* Fixup flashinfer support
2024-10-17 10:42:52 +02:00
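
A minimal sketch of the interface shape this commit describes: a single kwargs-only `attention` function that takes `key`/`value` explicitly, so callers never pass Tensors positionally and no `PREFILL_IN_KVCACHE` constant is needed to decide where keys and values come from. The naive implementation below is illustrative only; the real function in TGI dispatches to backend-specific kernels (flash attention, flashinfer, ...):

```python
import torch


def attention(
    *,  # kwargs-only: every Tensor argument must be passed by name,
        # so query/key/value cannot be mixed up at call sites
    query: torch.Tensor,  # (seq_len, num_heads, head_dim)
    key: torch.Tensor,    # (kv_len, num_heads, head_dim)
    value: torch.Tensor,  # (kv_len, num_heads, head_dim)
    softmax_scale: float,
) -> torch.Tensor:
    # Plain scaled-dot-product attention (no masking), as a stand-in for
    # the backend-specific kernels the actual implementation calls.
    scores = torch.einsum("qhd,khd->hqk", query, key) * softmax_scale
    probs = torch.softmax(scores, dim=-1)
    return torch.einsum("hqk,khd->qhd", probs, value)


# Callers pass key/value explicitly, whether they come from the current
# batch or were read back from the KV cache, so no global flag is needed:
q = torch.randn(4, 8, 64)
k = torch.randn(4, 8, 64)
v = torch.randn(4, 8, 64)
out = attention(query=q, key=k, value=v, softmax_scale=64 ** -0.5)
```

The kwargs-only marker (`*`) is the design point here: with several same-shaped Tensor arguments, positional calls are an easy source of silent bugs, and forcing keywords makes such mix-ups a `TypeError` instead.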
| File | Last commit | Date |
| --- | --- | --- |
| `custom_modeling` | Simplify the attention function (#2609) | 2024-10-17 10:42:52 +02:00 |
| `__init__.py` | Support e4m3fn KV cache (#2655) | 2024-10-17 10:42:16 +02:00 |
| `bloom.py` | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| `causal_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `flash_causal_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `galactica.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `globals.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `idefics_causal_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `mamba.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `mllama_causal_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `model.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `pali_gemma.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `seq2seq_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `types.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `vlm_causal_lm.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |