text-generation-inference/server/text_generation
Latest commit: b94f30215f by Nicolas Patry (2023-01-03 11:07:05 +01:00)
fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)
Fixes #12 in the easiest way I could think of.
File         Last commit                                                                      Date
models       fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)  2023-01-03 11:07:05 +01:00
pb           feat(server): Support all AutoModelForCausalLM on a best effort basis            2022-10-28 19:24:00 +02:00
__init__.py  feat(server): Support all AutoModelForCausalLM on a best effort basis            2022-10-28 19:24:00 +02:00
cache.py     feat(server): Support AutoModelForSeq2SeqLM                                      2022-11-04 18:03:04 +01:00
cli.py       feat(server): Support all AutoModelForCausalLM on a best effort basis            2022-10-28 19:24:00 +02:00
server.py    feat(server): Support AutoModelForSeq2SeqLM                                      2022-11-04 18:03:04 +01:00
utils.py     fix(server): Fix stop sequences (#11)                                            2022-12-16 16:03:39 +01:00