From 2a13f1a04682f43e48aea1f2378d1e32ee726256 Mon Sep 17 00:00:00 2001
From: Ikko Eltociear Ashimine
Date: Mon, 31 Jul 2023 22:43:44 +0900
Subject: [PATCH] chore: fix typo in mpt_modeling.py (#737)

# What does this PR do?

Fixed typo: implemetation -> implementation

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
---
 .../models/custom_modeling/mpt_modeling.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/server/text_generation_server/models/custom_modeling/mpt_modeling.py b/server/text_generation_server/models/custom_modeling/mpt_modeling.py
index e60571168..5ccf796df 100644
--- a/server/text_generation_server/models/custom_modeling/mpt_modeling.py
+++ b/server/text_generation_server/models/custom_modeling/mpt_modeling.py
@@ -297,7 +297,7 @@ def triton_flash_attn_fn(
 class MultiheadAttention(nn.Module):
     """Multi-head self attention.
 
-    Using torch or triton attention implemetation enables user to also use
+    Using torch or triton attention implementation enables user to also use
     additive bias.
     """
 
@@ -386,7 +386,7 @@ class MultiheadAttention(nn.Module):
 class MultiQueryAttention(nn.Module):
     """Multi-Query self attention.
 
-    Using torch or triton attention implemetation enables user to also use
+    Using torch or triton attention implementation enables user to also use
     additive bias.
     """
 
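
Reviewer note (not part of the patch): the docstrings being corrected describe the "additive bias" that the torch and triton attention paths support, i.e. a tensor added to the raw attention scores before softmax (as in ALiBi). Below is a minimal sketch of that idea for context only; the function name, shapes, and demo values are illustrative assumptions, not the actual API of mpt_modeling.py.

```python
# Minimal sketch of attention with an additive bias. Illustrative only;
# this is NOT the implementation in mpt_modeling.py.
import math
import torch


def scaled_dot_product_attention_with_bias(query, key, value, attn_bias=None):
    """query/key/value: [batch, heads, seq, head_dim];
    attn_bias: broadcastable to [batch, heads, seq, seq], or None."""
    # Raw attention scores, scaled by sqrt(head_dim).
    scores = query @ key.transpose(-2, -1) / math.sqrt(query.size(-1))
    if attn_bias is not None:
        # The bias is added elementwise to the scores before softmax --
        # this is the "additive bias" the docstrings refer to.
        scores = scores + attn_bias
    weights = torch.softmax(scores, dim=-1)
    return weights @ value


if __name__ == "__main__":
    q = k = v = torch.randn(1, 2, 5, 8)
    bias = torch.zeros(1, 2, 5, 5)  # in practice e.g. ALiBi slopes
    out = scaled_dot_product_attention_with_bias(q, k, v, attn_bias=bias)
    print(out.shape)  # torch.Size([1, 2, 5, 8])
```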