Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-24 16:32:12 +00:00)
Improve support for GPUs with capability < 8

* For models that cannot use flashinfer, use flash-attn v1 + paged attention when the compute capability is older than 8.
* Disable prefix caching when using paged attention.
* When using flash-attn v1, pass the key/value tensors rather than the cache, since v1 cannot use block tables.
* nix: add flash-attn-v1 to the server environment.
* Move disabling prefix caching into the block of exceptions.
* Capability as `usize`s.
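The bullets above describe the fallback path only in prose; the sketch below shows the general shape of such a capability-based dispatch. It is a minimal illustration using hypothetical names (`FLASH_MIN_CAPABILITY`, `select_attention_backend`, `attention_inputs`) that are not taken from the text-generation-inference codebase, and it assumes a CUDA-visible device.

```python
import torch

# Illustrative sketch only: these names are hypothetical and do not come
# from the text-generation-inference codebase.
FLASH_MIN_CAPABILITY = 8  # Ampere (sm_80) and newer


def select_attention_backend() -> str:
    """Pick an attention path based on the GPU's compute capability."""
    major, _minor = torch.cuda.get_device_capability()
    if major >= FLASH_MIN_CAPABILITY:
        # Capability >= 8: the regular flashinfer path, with prefix
        # caching enabled.
        return "flashinfer"
    # Capability < 8: fall back to flash-attn v1 + paged attention.
    # Prefix caching is disabled on this path.
    return "flash-attn-v1-paged"


def attention_inputs(query, key, value, cached_key, cached_value, backend):
    """Choose which key/value tensors to hand to the attention kernel."""
    if backend == "flash-attn-v1-paged":
        # flash-attn v1 cannot address the paged KV cache through block
        # tables, so it receives the freshly computed key/value directly.
        return query, key, value
    # Newer kernels read key/value out of the paged cache.
    return query, cached_key, cached_value
```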
custom_modeling
__init__.py
bloom.py
causal_lm.py
flash_causal_lm.py
galactica.py
globals.py
idefics_causal_lm.py
idefics.py
mamba.py
model.py
pali_gemma.py
seq2seq_lm.py
starcoder.py
types.py
vlm_causal_lm.py