This change adds support for Marlin-quantized models. Marlin is an FP16xINT4 matmul kernel that provides good speedups when decoding batches of 16-32 tokens. It supports 4-bit quantized models with symmetric quantization and a group size of -1 or 128. Tested with:

- Llama 2
- Llama 3
- Phi 3
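As a rough illustration of the constraints listed above, here is a minimal sketch of the kind of compatibility check a loader might perform before routing a layer to the Marlin kernel. The `QuantizeConfig` and `can_use_marlin` names are hypothetical, introduced only for this example; they are not the repository's actual API.

```python
# Hypothetical sketch: gating logic for Marlin compatibility, based only on
# the constraints stated above (symmetric, 4-bit, group size -1 or 128).
from dataclasses import dataclass


@dataclass
class QuantizeConfig:
    bits: int       # quantization bit width
    groupsize: int  # -1 means per-channel (no grouping)
    sym: bool       # symmetric quantization


def can_use_marlin(cfg: QuantizeConfig) -> bool:
    """Return True if a checkpoint's quantization config fits the Marlin kernel."""
    return cfg.sym and cfg.bits == 4 and cfg.groupsize in (-1, 128)


# Example: a symmetric 4-bit config with group size 128 qualifies,
# while an 8-bit config would fall back to another kernel.
assert can_use_marlin(QuantizeConfig(bits=4, groupsize=128, sym=True))
assert not can_use_marlin(QuantizeConfig(bits=8, groupsize=128, sym=True))
```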
attention/
awq/
gptq/
__init__.py
bnb.py
conv.py
eetq.py
exl2.py
fp8.py
layernorm.py
linear.py
marlin.py
medusa.py
mlp.py
rotary.py
speculative.py
tensor_parallel.py