text-generation-inference/server/text_generation_server/models
Dean Wyatte 13c62be467
GPTNeoX: Use static rotary embedding (#1498)
# What does this PR do?

`transformers` 4.35 removed rotary embeddings from GPTNeoX's weights
([link to line
diff](253f9a3f97 (diff-0e2a05d86c82e96f516db8c14070ceb36f53ca44c6bc21a9cd92ad2e777b9cf1R298))).
This PR applies the same fix as
https://github.com/huggingface/text-generation-inference/pull/793, which
generates them on the fly from the appropriate values in the config
file.

Fixes
https://github.com/huggingface/text-generation-inference/issues/1460
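
For context, generating the rotary embeddings on the fly boils down to precomputing cos/sin tables from config values such as `rotary_pct` and `rotary_emb_base`, rather than loading them from the checkpoint. A minimal dependency-free sketch (the function name `build_rotary_tables` and its defaults are illustrative, not TGI's actual implementation):

```python
import math

def build_rotary_tables(dim: int, base: float = 10000.0, max_positions: int = 2048):
    """Precompute cos/sin tables for rotary position embeddings.

    `dim` is the rotary dimension, e.g. int(head_size * rotary_pct) for a
    GPT-NeoX-style config; `base` corresponds to `rotary_emb_base`.
    Illustrative sketch only, not the TGI code path.
    """
    # Inverse frequencies: base^(-2i/dim) for i in [0, dim/2)
    inv_freq = [base ** (-i / dim) for i in range(0, dim, 2)]
    # One row of cos/sin values per position, one column per frequency
    cos_table = [[math.cos(pos * f) for f in inv_freq] for pos in range(max_positions)]
    sin_table = [[math.sin(pos * f) for f in inv_freq] for pos in range(max_positions)]
    return cos_table, sin_table
```

Because these tables depend only on config values, they can be rebuilt at load time for any checkpoint, which is why dropping them from the weights (as `transformers` 4.35 did) is recoverable.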

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Did you read the [contributor
guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
      Pull Request section?
- [x] Was this discussed/approved via a Github issue or the
[forum](https://discuss.huggingface.co/)? Please add a link
      to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes?
Here are the
[documentation
guidelines](https://github.com/huggingface/transformers/tree/main/docs),
and
[here are tips on formatting
docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?


## Who can review?

@OlivierDehaene OR @Narsil
2024-02-01 09:34:11 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| custom_modeling | GPTNeoX: Use static rotary embedding (#1498) | 2024-02-01 09:34:11 +01:00 |
| __init__.py | v1.4.0 (#1494) | 2024-01-26 19:04:57 +01:00 |
| bloom.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| cache_manager.py | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| causal_lm.py | Fixing top_n_tokens. (#1497) | 2024-01-26 20:13:47 +01:00 |
| flash_causal_lm.py | Fixing top_n_tokens. (#1497) | 2024-01-26 20:13:47 +01:00 |
| flash_llama.py | v1.4.0 (#1494) | 2024-01-26 19:04:57 +01:00 |
| flash_mistral.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_mixtral.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| flash_neox.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_phi.py | v1.4.0 (#1494) | 2024-01-26 19:04:57 +01:00 |
| flash_rw.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| flash_santacoder.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| galactica.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| gpt_neox.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| idefics_causal_lm.py | feat: add more latency metrics in forward (#1346) | 2023-12-14 15:59:38 +01:00 |
| idefics.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| model.py | fix: fix logic if sliding window key is not present in config (#1352) | 2023-12-15 14:56:17 +01:00 |
| mpt.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| opt.py | fix: fix gpt-q params loading | 2023-12-14 11:02:16 +01:00 |
| phi.py | v1.4.0 (#1494) | 2024-01-26 19:04:57 +01:00 |
| rw.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| santacoder.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| seq2seq_lm.py | Fixing top_n_tokens. (#1497) | 2024-01-26 20:13:47 +01:00 |
| t5.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| types.py | Fixing top_n_tokens. (#1497) | 2024-01-26 20:13:47 +01:00 |