Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 23:12:07 +00:00)
This PR adds support for AMD Instinct MI210 & MI250 GPUs, with paged attention and FlashAttention v2 (FAv2) support.

Remaining items to discuss, among possible others:

* Should we have a `ghcr.io/huggingface/text-generation-inference:1.1.0+rocm` hosted image, or is it too early?
* Should we set up CI on MI210/MI250? I don't have access to TGI's runners, though.
* Are we comfortable with these changes living directly in TGI, or do we need a fork?

---------

Co-authored-by: Felix Marty <felix@hf.co>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
Co-authored-by: Your Name <you@example.com>
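For context, running TGI on a ROCm host would look roughly like the sketch below. The image tag is an assumption — the PR above is precisely asking whether such a hosted ROCm image should exist — and the model id is only an example; the `--device`/`--group-add` flags are the standard way to pass AMD GPUs (`/dev/kfd`, `/dev/dri`) through to a container.

```shell
# Hedged sketch: launching TGI on an AMD Instinct machine.
# The image tag "1.1.0+rocm" is hypothetical (see the discussion above).
model=tiiuae/falcon-7b-instruct   # example model id
volume=$PWD/data                  # cache model weights between runs

# /dev/kfd and /dev/dri expose the AMD GPUs; "video" group grants access.
docker run --rm -it \
  --device=/dev/kfd --device=/dev/dri --group-add video \
  --ipc=host --shm-size 1g \
  -p 8080:80 -v "$volume":/data \
  ghcr.io/huggingface/text-generation-inference:1.1.0+rocm \
  --model-id "$model"
```

Once the server is up, the usual `/generate` HTTP endpoint is available on port 8080, same as on the CUDA images.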
Files changed in `server/text_generation_server/models/`:

* custom_modeling/
* __init__.py
* bloom.py
* cache_manager.py
* causal_lm.py
* flash_causal_lm.py
* flash_llama.py
* flash_mistral.py
* flash_neox.py
* flash_rw.py
* flash_santacoder.py
* galactica.py
* gpt_neox.py
* idefics_causal_lm.py
* idefics.py
* model.py
* mpt.py
* opt.py
* rw.py
* santacoder.py
* seq2seq_lm.py
* t5.py
* types.py