text-generation-inference/server/text_generation_server
Daniël de Kok 628d6a13da Add support for exl2 quantization
Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict
  (see the first sketch after this list).
* Move scratch space computation to warmup, because the maximum
  input sequence length is needed to avoid allocating huge scratch
  buffers that cause out-of-memory errors (see the second sketch
  below).
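
A minimal sketch of the wrapper idea; the class name and fields below are illustrative assumptions, not the actual types this commit adds:

```python
# Illustrative sketch only: a typed wrapper for quantizer parameters,
# replacing an untyped tuple that callers would otherwise unpack and
# repack as a dict. Names and fields are assumptions, not TGI's API.
from dataclasses import dataclass

import torch


@dataclass
class QuantizerParams:
    q_weight: torch.Tensor  # packed quantized weight matrix
    q_scale: torch.Tensor   # per-group dequantization scales
    q_groups: torch.Tensor  # mapping of rows to quantization groups
    bits: int               # quantization bit width


def dequantize(params: QuantizerParams) -> torch.Tensor:
    # Call sites take one typed object; adding a field later does not
    # break every caller the way growing a positional tuple would.
    ...
```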
2024-09-24 03:19:39 +00:00
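
And a sketch of deferring scratch-buffer allocation to warmup; the class and parameter names here are assumed for illustration, not the commit's actual code:

```python
# Illustrative sketch only: scratch space is sized during warmup, once
# the maximum input sequence length is known, instead of allocating a
# pessimistic worst-case buffer up front that could OOM.
import torch


class ScratchSpace:  # assumed name, not the actual TGI class
    def __init__(self, hidden_size: int, device: torch.device):
        self.hidden_size = hidden_size
        self.device = device
        self.buffer = None  # deferred until warmup

    def warmup(self, max_input_length: int, max_batch_size: int):
        # Size for the largest request the server will actually serve.
        rows = max_input_length * max_batch_size
        self.buffer = torch.empty(
            (rows, self.hidden_size),
            dtype=torch.float16,
            device=self.device,
        )
```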
layers Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
models Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
pb chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
utils Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
interceptor.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
server.py Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
tracing.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00