text-generation-inference/server/text_generation/models
Latest commit: b94f30215f by Nicolas Patry, 2023-01-03 11:07:05 +01:00
fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)
Fixes #12 in the easiest way I could think of.
File           Last commit                                                                           Date
__init__.py    feat(server): Add model tests (#6)                                                    2022-12-08 18:49:33 +01:00
bloom.py       feat: Return logprobs (#8)                                                            2022-12-15 17:03:56 +01:00
causal_lm.py   fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)        2023-01-03 11:07:05 +01:00
galactica.py   feat: Return logprobs (#8)                                                            2022-12-15 17:03:56 +01:00
model.py       fix(batching): Avoid theoretical hang in batcher loop (#5)                            2022-12-05 10:10:59 +01:00
seq2seq_lm.py  fix(server): Check for device type correctly when determining initial padding (#16)   2022-12-30 19:30:42 +01:00
types.py       feat: Return logprobs (#8)                                                            2022-12-15 17:03:56 +01:00
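The causal_lm.py fix above concerns lossless decoding. As a minimal sketch of the behavior it targets, assuming the Hugging Face transformers tokenizer API (where the keyword is spelled clean_up_tokenization_spaces, unlike the commit title) and an arbitrary gpt2 checkpoint chosen purely for illustration:

    # Sketch of how tokenization-space cleanup breaks round-tripping,
    # using the transformers tokenizer API. The "gpt2" checkpoint is an
    # illustrative assumption, not something this directory prescribes.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    text = "Hello , world !"
    ids = tokenizer.encode(text)

    # With cleanup enabled, decode() merges spaces around punctuation,
    # so decode(encode(text)) does not always reproduce the input.
    cleaned = tokenizer.decode(ids, clean_up_tokenization_spaces=True)

    # With cleanup disabled, decode() returns the tokenizer's raw
    # detokenization unchanged.
    lossless = tokenizer.decode(ids, clean_up_tokenization_spaces=False)

    print(repr(cleaned))   # 'Hello, world!'
    print(repr(lossless))  # 'Hello , world !'

Because cleanup can silently alter spacing around punctuation, disabling it keeps decoded text faithful to the tokens actually generated, which is presumably what "lossless decoding" refers to in the commit title.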