text-generation-inference/integration-tests/models
drbh b2fc097b2b feat: adds phi model (#1442)
This PR adds basic modeling for phi-2

run
```bash
text-generation-server \
    serve \
    microsoft/phi-2 \
    --revision 834565c23f9b28b96ccbeabe614dd906b6db551a
```

test
```bash
curl -s localhost:3000/generate \
   -X POST \
   -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
   -H 'Content-Type: application/json' | jq .
```
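The same request can be issued from Python with only the standard library. This is a hedged sketch of an equivalent client, not part of the PR; it assumes the server started in the "run" step above is listening on `localhost:3000`, so the actual network call is left commented out.

```python
# Build the same JSON request the curl test above sends, using only the
# Python standard library. Assumes a text-generation-inference router is
# listening on localhost:3000 (as in the curl example).
import json
from urllib import request

payload = json.dumps(
    {"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}}
).encode("utf-8")

req = request.Request(
    "http://localhost:3000/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is running:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["generated_text"])
```

The response body mirrors curl's output: a JSON object whose `generated_text` field holds the completion.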

notes
- recently (~1 day ago) the Phi weights and model were updated to
accommodate adding [GQA/MQA attention to the
model](https://github.com/huggingface/transformers/pull/28163). This
impl expects the original model format, so a fixed revision is required
at the moment.
- this PR only includes a basic implementation of the model; it can
later be extended to support Flash and Sharded versions, as well as to
make use of better optimizations
2024-04-22 13:06:38 +03:00
| Name | Last commit | Date |
|------|-------------|------|
| __snapshots__ | feat: adds phi model (#1442) | 2024-04-22 13:06:38 +03:00 |
| test_bloom_560m_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_bloom_560m.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_awq_sharded.py | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| test_flash_awq.py | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| test_flash_falcon.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_llama_gptq.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| test_flash_llama.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_medusa.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| test_flash_mistral.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| test_flash_neox_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_neox.py | feat(server): add paged attention to flash models (#516) | 2023-06-30 19:09:59 +02:00 |
| test_flash_phi.py | feat: adds phi model (#1442) | 2024-04-22 13:06:38 +03:00 |
| test_flash_santacoder.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_starcoder_gptq.py | Make GPTQ test less flaky (#1295) | 2023-11-28 21:22:35 +01:00 |
| test_flash_starcoder.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_idefics.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| test_mpt.py | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| test_mt0_base.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_neox_sharded.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_neox.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_t5_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |