text-generation-inference/server/text_generation_server
SeongBeomLEE 097e72a672 fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637)
# What does this PR do?

There are a few cases where a model uses a Mistral or Mixtral architecture but not a Llama tokenizer. In those cases, loading `LlamaTokenizerFast` fails, so this change falls back to `AutoTokenizer` in the exception handler.

Similar PR #619
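For context, here is a minimal sketch of the fallback pattern this PR describes; it is not the exact diff, and the hardcoded `model_id` and flags are illustrative stand-ins for the arguments the server passes through:

```python
from transformers import AutoTokenizer, LlamaTokenizerFast

# Illustrative values; in the server these come from the launch arguments.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"
revision = None
trust_remote_code = False

try:
    # Most Mistral/Mixtral checkpoints ship a Llama-style tokenizer,
    # so try the fast Llama tokenizer first.
    tokenizer = LlamaTokenizerFast.from_pretrained(
        model_id,
        revision=revision,
        padding_side="left",
        truncation_side="left",
        trust_remote_code=trust_remote_code,
    )
except Exception:
    # Some checkpoints use a different tokenizer class; AutoTokenizer
    # resolves the correct one from the checkpoint's tokenizer config.
    tokenizer = AutoTokenizer.from_pretrained(
        model_id,
        revision=revision,
        padding_side="left",
        truncation_side="left",
        trust_remote_code=trust_remote_code,
    )
```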

@Narsil
2024-04-25 12:35:44 +03:00
| Name | Last commit message | Last commit date |
|---|---|---|
| `models` | fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637) | 2024-04-25 12:35:44 +03:00 |
| `pb` | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| `utils` | fix: improve tool type, bump pydantic and outlines (#1650) | 2024-04-25 12:34:55 +03:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | Revamp medusa implementation so that every model can benefit. (#1588) | 2024-04-25 09:13:03 +03:00 |
| `habana_quantization_env.py` | Add Habana copyright header (#122) | 2024-04-08 18:06:21 +02:00 |
| `interceptor.py` | Add Habana copyright header (#122) | 2024-04-08 18:06:21 +02:00 |
| `server.py` | fix: fix gpt-q with groupsize = -1 (#1358) | 2024-04-19 15:05:50 +03:00 |
| `tgi_service.py` | Speculative (#1308) | 2024-04-18 12:39:39 +00:00 |
| `tracing.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |