text-generation-inference/server/text_generation_server/utils
Daniël de Kok 77ac0f364b Add support for Marlin-quantized models
This change adds support for Marlin-quantized models. Marlin is an
FP16xINT4 matmul kernel that provides good speedups when decoding
batches of 16-32 tokens. It supports 4-bit quantized models with
symmetric quantization and a group size of -1 or 128.
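
As a minimal sketch of the constraints above, the Python check below
assumes a GPTQ-style quantization config; QuantConfig and
is_marlin_compatible are illustrative names, not the actual
text-generation-inference API.

from dataclasses import dataclass

# Marlin's FP16xINT4 kernel only handles 4-bit symmetric quantization
# with a group size of -1 (no grouping: one scale per output column)
# or 128.
MARLIN_BITS = (4,)
MARLIN_GROUP_SIZES = (-1, 128)

@dataclass
class QuantConfig:  # hypothetical container for quantization parameters
    bits: int        # quantization width in bits
    group_size: int  # -1 means no grouping
    sym: bool        # True for symmetric quantization (no zero-points)

def is_marlin_compatible(cfg: QuantConfig) -> bool:
    """Return True if weights quantized with cfg can use the Marlin kernel."""
    return (
        cfg.bits in MARLIN_BITS
        and cfg.group_size in MARLIN_GROUP_SIZES
        and cfg.sym
    )

# Example: 4-bit, symmetric, group size 128 is accepted; asymmetric is not.
assert is_marlin_compatible(QuantConfig(bits=4, group_size=128, sym=True))
assert not is_marlin_compatible(QuantConfig(bits=4, group_size=128, sym=False))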

Tested with:

- Llama 2
- Llama 3
- Phi 3
2024-09-24 03:38:05 +00:00
__init__.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-25 15:10:53 +03:00
dist.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
hub.py Fixing the download strategy for ibm-fms (#1917) 2024-07-17 05:36:58 +00:00
import_utils.py Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with one file per hardware backend. (#1986) 2024-09-24 03:19:39 +00:00
log.py v1.3.4 2024-04-22 09:08:34 +03:00
logits_process.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
peft.py fix: fix local loading for .bin models (#1419) 2024-04-22 09:17:52 +03:00
speculate.py chore: formatting 2024-04-18 16:26:00 +03:00
tokens.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
watermark.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
weights.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00