text-generation-inference/server/text_generation_server

Latest commit: 34bca0b8d3 by Nick Hill, 2023-04-27 09:57:28 +02:00
fix(server): Small tidy of code from recent changes (#251)
remaining_decode_tokens was calculated twice in Seq2SeqLMBatch.filter()
Name             Last commit message                                          Date
models/          fix(server): Small tidy of code from recent changes (#251)   2023-04-27 09:57:28 +02:00
pb/              feat(server): clear cache on error (#143)                    2023-03-28 11:29:35 +02:00
utils/           feat(server): support OPT models (#55)                       2023-04-11 19:16:41 +02:00
__init__.py      feat(clients): Python client (#103)                          2023-03-07 18:52:22 +01:00
cache.py         feat(server): clear cache on error (#143)                    2023-03-28 11:29:35 +02:00
cli.py           fix(docker): fix docker image dependencies (#187)            2023-04-17 00:26:47 +02:00
interceptor.py   feat(clients): Python client (#103)                          2023-03-07 18:52:22 +01:00
server.py        feat(router): new healthcheck that skips the queue (#244)    2023-04-26 20:23:54 +02:00
tracing.py       feat(clients): Python client (#103)                          2023-03-07 18:52:22 +01:00
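
The latest commit above removes a duplicated computation in Seq2SeqLMBatch.filter(). As a rough illustration of that kind of tidy-up, the sketch below hoists a repeated subtraction into a single local named remaining_decode_tokens. Only the class, method, and variable names come from the commit message; StoppingCriteria, its fields, and the surrounding logic are assumptions made for this example, not code from the repository.

```python
# Minimal, hypothetical sketch of the tidy-up described in the commit message:
# a value that used to be computed twice inside Seq2SeqLMBatch.filter() is now
# computed once and reused. StoppingCriteria, its fields, and the filter()
# signature are assumptions for illustration, not the repository's real code.
from dataclasses import dataclass
from typing import List


@dataclass
class StoppingCriteria:
    max_new_tokens: int
    current_tokens: int


@dataclass
class Seq2SeqLMBatch:
    stopping_criterias: List[StoppingCriteria]
    max_remaining_decode_tokens: int = 0

    def filter(self, kept_indices: List[int]) -> "Seq2SeqLMBatch":
        kept: List[StoppingCriteria] = []
        max_remaining = 0
        for i in kept_indices:
            criteria = self.stopping_criterias[i]
            # Compute the remaining decode budget once; the local is then
            # reused below instead of repeating the same subtraction.
            remaining_decode_tokens = (
                criteria.max_new_tokens - criteria.current_tokens
            )
            if remaining_decode_tokens <= 0:
                continue  # nothing left to generate for this request
            max_remaining = max(max_remaining, remaining_decode_tokens)
            kept.append(criteria)
        return Seq2SeqLMBatch(
            stopping_criterias=kept,
            max_remaining_decode_tokens=max_remaining,
        )


if __name__ == "__main__":
    batch = Seq2SeqLMBatch(
        stopping_criterias=[
            StoppingCriteria(max_new_tokens=20, current_tokens=5),
            StoppingCriteria(max_new_tokens=10, current_tokens=10),
        ]
    )
    filtered = batch.filter([0, 1])
    # Only the first request still has decode budget (15 tokens remaining).
    print(len(filtered.stopping_criterias), filtered.max_remaining_decode_tokens)
```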