Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-20 06:12:07 +00:00)
# What does this PR do?

This adds a non-flash version of MPT. A flash version is harder because we would need a CUDA flash-attention kernel that supports an attention bias.

Fixes https://github.com/huggingface/text-generation-inference/issues/361
Fixes https://github.com/huggingface/text-generation-inference/issues/491
Fixes https://github.com/huggingface/text-generation-inference/issues/290
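For context, the bias in question is MPT's ALiBi attention bias: every head adds a fixed, position-dependent offset to the attention scores, which standard flash-attention kernels (built around a plain causal mask) cannot express. The sketch below is a minimal, hypothetical illustration of building such a bias in PyTorch; `build_alibi_bias` is not a function from this PR, and the exact slope/offset conventions in `mpt_modeling.py` may differ.

```python
import math
import torch

def build_alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Hypothetical sketch of an ALiBi-style attention bias.

    Each head gets a fixed slope; the bias grows linearly with key
    position, so it must be added to the raw attention scores rather
    than expressed as a boolean mask.
    """
    # Standard ALiBi slopes: a geometric sequence computed for the next
    # power of two, then truncated to num_heads.
    n = 2 ** math.ceil(math.log2(num_heads))
    base = 2 ** (-(2 ** -(math.log2(n) - 3)))
    slopes = torch.tensor([base ** (i + 1) for i in range(n)])[:num_heads]
    # Bias depends only on the key position; shape (num_heads, 1, seq_len),
    # broadcast over query positions. Since softmax is shift-invariant per
    # row, this is equivalent (under causal masking) to the negative
    # relative-distance formulation used in the ALiBi paper.
    positions = torch.arange(seq_len)
    return slopes.view(num_heads, 1, 1) * positions.view(1, 1, seq_len)

# The bias is added to the raw scores before softmax, e.g.:
# scores = q @ k.transpose(-1, -2) / math.sqrt(head_dim)
# scores = scores + build_alibi_bias(num_heads, seq_len)
```

Because this additive term varies per head and per position, fusing it into a flash-attention kernel requires the kernel itself to accept a bias tensor, which is the extra work the PR description alludes to.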
Files:

- `__init__.py`
- `bloom_modeling.py`
- `flash_llama_modeling.py`
- `flash_neox_modeling.py`
- `flash_rw_modeling.py`
- `flash_santacoder_modeling.py`
- `mpt_modeling.py`
- `neox_modeling.py`
- `opt_modeling.py`
- `t5_modeling.py`