text-generation-inference/server/text_generation_server
Name              Last commit                  Message
models/           2023-07-20 17:23:49 +02:00   fix(server): use mem_get_info to get kv cache size (#664)
pb/               2023-03-28 11:29:35 +02:00   feat(server): clear cache on error (#143)
utils/            2023-07-20 17:23:49 +02:00   fix(server): use mem_get_info to get kv cache size (#664)
__init__.py       2023-03-07 18:52:22 +01:00   feat(clients): Python client (#103)
cache.py          2023-07-06 14:28:33 +02:00   fix(server): decrease memory fragmentation (#557)
cli.py            2023-07-18 12:19:05 +02:00   feat(server): Reworking the quantization script so it's still universal (not llama specific) (#587)
interceptor.py    2023-07-12 17:06:19 +02:00   feat(server): empty cache on errors
server.py         2023-07-19 09:31:25 +02:00   feat(server): auto max_batch_total_tokens for flash att models (#630)
tracing.py        2023-03-07 18:52:22 +01:00   feat(clients): Python client (#103)
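
The most recent change to models/ and utils/ (#664) sizes the KV cache from torch.cuda.mem_get_info. As a rough illustration of that idea only (not the repository's actual code), the hypothetical helper below queries free GPU memory and derives how many cache blocks fit; the block size, layer and head counts, dtype width, and the 0.9 headroom fraction are all assumed values.

    # Hypothetical sketch, not TGI's implementation: illustrates sizing a KV
    # cache from free GPU memory via torch.cuda.mem_get_info, as the title of
    # #664 suggests. Requires a CUDA-capable device.
    import torch

    def estimate_kv_cache_blocks(
        block_size: int = 16,          # tokens per cache block (assumed)
        num_layers: int = 32,          # transformer depth (assumed)
        num_heads: int = 32,           # attention heads (assumed)
        head_dim: int = 128,           # per-head dimension (assumed)
        dtype_bytes: int = 2,          # fp16/bf16 element size
        memory_fraction: float = 0.9,  # headroom for activations (assumed)
    ) -> int:
        """Return how many KV-cache blocks fit in currently free GPU memory."""
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        # Each block holds keys and values (factor 2) for every layer, each of
        # shape (block_size, num_heads, head_dim).
        bytes_per_block = 2 * num_layers * block_size * num_heads * head_dim * dtype_bytes
        return int(free_bytes * memory_fraction) // bytes_per_block

Measuring free memory at startup, rather than assuming a fixed budget, is what lets the server pick a cache size that adapts to whatever else is resident on the GPU.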