text-generation-inference/docs/source/conceptual
Daniël de Kok 628d6a13da Add support for exl2 quantization
Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict (see the
  first sketch below).
* Move scratch space computation to warmup, because we need the
  maximum input sequence length to avoid allocating huge scratch
  buffers that would otherwise OOM (see the second sketch below).
2024-09-24 03:19:39 +00:00
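
The first change replaces an untyped tuple of quantizer parameters with a small typed wrapper. A minimal sketch of that pattern, assuming PyTorch tensors; the field names here are hypothetical illustrations, not the actual TGI exl2 API:

```python
from dataclasses import dataclass

import torch


@dataclass
class Exl2Weight:
    """Typed container for exl2 quantizer parameters.

    Hypothetical fields for illustration; the point is replacing an
    untyped tuple that callers had to unpack and repack as a dict.
    """

    q_weight: torch.Tensor   # packed quantized weights
    q_scale: torch.Tensor    # per-group quantization scales
    q_invperm: torch.Tensor  # inverse permutation for activation ordering
    q_groups: torch.Tensor   # group index metadata

    @property
    def device(self) -> torch.device:
        # Callers can query one attribute instead of tuple indices.
        return self.q_weight.device
```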
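The second change defers scratch-buffer allocation to warmup, where the maximum input sequence length is actually known. A hedged sketch of the idea; the class and method names are illustrative, not the actual TGI implementation:

```python
import torch


class Exl2ScratchSpace:
    """Illustrative scratch-buffer holder sized during warmup."""

    def __init__(self, device: torch.device, dtype: torch.dtype = torch.float16):
        self.device = device
        self.dtype = dtype
        self.buffer = None  # allocated lazily, once warmup knows the real bound

    def warmup(self, max_input_tokens: int, hidden_size: int) -> None:
        # Size the buffer from the maximum input sequence length observed
        # at warmup instead of a pessimistic load-time guess that could OOM.
        self.buffer = torch.empty(
            max_input_tokens * hidden_size, dtype=self.dtype, device=self.device
        )
```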
| File | Last commit | Date |
|---|---|---|
| flash_attention.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| guidance.md | Add support for exl2 quantization | 2024-09-24 03:19:39 +00:00 |
| paged_attention.md | Paged Attention Conceptual Guide (#901) | 2023-09-08 14:18:42 +02:00 |
| quantization.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| safetensors.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| speculation.md | feat: add train medusa head tutorial (#1934) | 2024-07-17 05:36:58 +00:00 |
| streaming.md | fix typos in docs and add small clarifications (#1790) | 2024-06-10 09:24:52 +03:00 |
| tensor_parallelism.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |