Mirror of https://github.com/huggingface/text-generation-inference.git
This PR adds basic modeling for phi-2.

run

```bash
text-generation-server \
    serve \
    microsoft/phi-2 \
    --revision 834565c23f9b28b96ccbeabe614dd906b6db551a
```

test

```bash
curl -s localhost:3000/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json' | jq .
# {
#   "generated_text": "\nDeep learning is a subset of machine learning that uses artificial neural networks to learn from data. These"
# }
```

notes

- Recently (~1 day ago) the Phi weights and model were updated to accommodate adding [GQA/MQA attention to the model](https://github.com/huggingface/transformers/pull/28163). This implementation expects the original model format, so a fixed revision is required at the moment.
- This PR only includes a basic implementation of the model; it can later be extended to support Flash and sharded versions, as well as to make use of better optimizations.
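The curl test above can also be issued from Python. This is a minimal, hypothetical sketch (not part of this PR), assuming the `serve` command above is running and listening on `localhost:3000` and that the third-party `requests` library is installed.

```python
# Hypothetical client for the /generate endpoint exercised by the curl test above.
# Assumes the server started with `text-generation-server serve microsoft/phi-2 ...`
# is reachable at localhost:3000.
import requests

response = requests.post(
    "http://localhost:3000/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 20},
    },
    timeout=60,
)
response.raise_for_status()

# The response body contains a "generated_text" field, as shown in the curl example.
print(response.json()["generated_text"])
```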
Directory listing of `server/text_generation_server/models/`:

- custom_modeling
- __init__.py
- bloom.py
- cache_manager.py
- causal_lm.py
- flash_causal_lm.py
- flash_llama.py
- flash_mistral.py
- flash_mixtral.py
- flash_neox.py
- flash_phi.py
- flash_rw.py
- flash_santacoder.py
- galactica.py
- gpt_neox.py
- idefics_causal_lm.py
- idefics.py
- model.py
- mpt.py
- opt.py
- phi.py
- rw.py
- santacoder.py
- seq2seq_lm.py
- t5.py
- types.py