text-generation-inference/server/text_generation_server/models
Dong Shin a072660bf5
fix: LlamaTokenizerFast to AutoTokenizer at flash_llama.py (#619)
# What does this PR do?

Some tokenizer configs on the Hugging Face Hub use `LlamaTokenizer`, which I assume is why `LlamaTokenizer` was selected here before.

For the cases where a model uses the Llama architecture but a different tokenizer, why not fall back to `AutoTokenizer` in the exception handling?

In the case of `decapoda-research/llama-7b-hf`, `LLamaTokenizer` is still referenced in config.json, so it has to be loaded through `LlamaTokenizer`.
If `LlamaTokenizer` throws an exception instead, `AutoTokenizer` takes over and
loads `LlamaTokenizerFast`.
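
Below is a minimal sketch of the resulting fallback (not the exact diff; the helper name `load_llama_tokenizer` and the keyword arguments other than `model_id` are illustrative):

```python
from typing import Optional

from transformers import AutoTokenizer, LlamaTokenizer


def load_llama_tokenizer(
    model_id: str,
    revision: Optional[str] = None,
    trust_remote_code: bool = False,
):
    """Prefer LlamaTokenizer, but fall back to AutoTokenizer if it fails."""
    try:
        # Works for repos such as decapoda-research/llama-7b-hf that still
        # reference the Llama tokenizer class in their config.
        return LlamaTokenizer.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
    except Exception:
        # Llama-architecture models with a non-Llama tokenizer end up here;
        # AutoTokenizer resolves the right class (e.g. LlamaTokenizerFast).
        return AutoTokenizer.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
```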


Fixes #560


## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Did you read the [contributor
guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
      Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the
[forum](https://discuss.huggingface.co/)? Please add a link
      to it if that's the case.
- [x] Did you make sure to update the documentation with your changes?
Here are the
[documentation
guidelines](https://github.com/huggingface/transformers/tree/main/docs),
and
[here are tips on formatting
docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?


## Who can review?

Anyone in the community is free to review the PR once the tests have
passed. Feel free to tag
members/contributors who may be interested in your PR.

@Narsil
2023-08-14 14:20:18 +02:00
custom_modeling Llama change. (#793) 2023-08-08 13:43:40 +02:00
__init__.py Update __init__.py (#794) 2023-08-08 12:09:51 +02:00
bloom.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
causal_lm.py feat: Add the option to force another dtype than f16. (#513) 2023-06-30 20:30:09 +02:00
flash_causal_lm.py feat: add cuda memory fraction (#659) 2023-07-24 11:43:58 +02:00
flash_llama.py fix: LlamaTokenizerFast to AutoTokenizer at flash_llama.py (#619) 2023-08-14 14:20:18 +02:00
flash_neox.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
flash_rw.py feat(server): Add native support for PEFT Lora models (#762) 2023-08-03 17:22:45 +02:00
flash_santacoder.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
galactica.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
gpt_neox.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
model.py Fix typing in Model.generate_token (#733) 2023-07-31 14:35:14 +02:00
mpt.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
opt.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
rw.py feat: Add the option to force another dtype than f16. (#513) 2023-06-30 20:30:09 +02:00
santacoder.py Directly load GPTBigCode to specified device (#618) 2023-07-21 11:27:31 +02:00
seq2seq_lm.py feat: Add the option to force another dtype than f16. (#513) 2023-06-30 20:30:09 +02:00
t5.py fix(server): T5 weights names. (#582) 2023-07-12 10:01:42 +02:00
types.py feat(server): support vectorized warpers in flash causal lm (#317) 2023-05-26 12:30:27 +02:00