text-generation-inference/server/text_generation_server/models
drbh · 7e2a7433d3 · feat: adds phi model (#1442) · 2024-01-25 15:37:53 +01:00
This PR adds basic modeling support for phi-2.

run
```bash
text-generation-server \
    serve \
    microsoft/phi-2 \
    --revision 834565c23f9b28b96ccbeabe614dd906b6db551a
```


test
```bash
curl -s localhost:3000/generate \
   -X POST \
   -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
   -H 'Content-Type: application/json' | jq .
# {
#   "generated_text": "\nDeep learning is a subset of machine learning that uses artificial neural networks to learn from data. These"
# }
```
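
For reference, the same request can be issued from Python; a minimal sketch, assuming the `requests` package and the router listening on localhost:3000 as in the curl call above:
```python
import requests

# Same request as the curl example above, assuming the server started in
# the `run` step is reachable on localhost:3000.
response = requests.post(
    "http://localhost:3000/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 20},
    },
)
response.raise_for_status()
print(response.json()["generated_text"])
```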



notes 
- recently (~1 day ago) the Phi weights and model were updated to accommodate adding [GQA/MQA attention to the model](https://github.com/huggingface/transformers/pull/28163). This implementation expects the original model format, so a fixed revision is required for now (see the download sketch below).
- this PR only includes a basic implementation of the model; it can later be extended to support Flash and sharded versions, as well as to make use of better optimizations.
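
To make the pinned-revision requirement concrete, the original-format weights can be fetched by revision; a minimal sketch, assuming the `huggingface_hub` package (not something this PR uses directly):
```python
from huggingface_hub import snapshot_download

# Download microsoft/phi-2 pinned to the pre-GQA/MQA revision that this
# implementation expects (the same revision passed to --revision above).
local_path = snapshot_download(
    repo_id="microsoft/phi-2",
    revision="834565c23f9b28b96ccbeabe614dd906b6db551a",
)
print(local_path)
```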
| File | Last commit | Date |
| --- | --- | --- |
| custom_modeling | feat: adds phi model (#1442) | 2024-01-25 15:37:53 +01:00 |
| __init__.py | feat: adds phi model (#1442) | 2024-01-25 15:37:53 +01:00 |
| bloom.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| cache_manager.py | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| causal_lm.py | feat: add more latency metrics in forward (#1346) | 2023-12-14 15:59:38 +01:00 |
| flash_causal_lm.py | feat: add more latency metrics in forward (#1346) | 2023-12-14 15:59:38 +01:00 |
| flash_llama.py | Fix local load for Medusa (#1420) | 2024-01-10 18:36:20 +01:00 |
| flash_mistral.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_mixtral.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| flash_neox.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_phi.py | feat: adds phi model (#1442) | 2024-01-25 15:37:53 +01:00 |
| flash_rw.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_santacoder.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| galactica.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| gpt_neox.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| idefics_causal_lm.py | feat: add more latency metrics in forward (#1346) | 2023-12-14 15:59:38 +01:00 |
| idefics.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| model.py | fix: fix logic if sliding window key is not present in config (#1352) | 2023-12-15 14:56:17 +01:00 |
| mpt.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| opt.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| phi.py | feat: adds phi model (#1442) | 2024-01-25 15:37:53 +01:00 |
| rw.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| santacoder.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| seq2seq_lm.py | feat: add more latency metrics in forward (#1346) | 2023-12-14 15:59:38 +01:00 |
| t5.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| types.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |