Mirror of https://github.com/huggingface/text-generation-inference.git
Synced 2025-04-21 23:12:07 +00:00
GPTQ-Marlin is currently the best-performing kernel for GPTQ models, so use it by default when the kernels are installed, the GPU supports it, and the kernels support the configuration. For models generated by `text-generation-server quantize`, use `sym=False`: this subcommand has used asymmetric quantization since the beginning, and incorrectly reporting the model as symmetric would select GPTQ-Marlin (which does not support asymmetric quantization).
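The selection rule described above (kernels installed, GPU supported, configuration supported, symmetric quantization only) can be sketched as a small predicate. This is a hypothetical illustration, not TGI's actual API: the function, dataclass, and the exact sets of supported bit widths and group sizes are assumptions.

```python
from dataclasses import dataclass

# Assumed parameter combinations for GPTQ-Marlin: symmetric quantization
# only, 4- or 8-bit weights, and a limited set of group sizes.
MARLIN_BITS = {4, 8}
MARLIN_GROUP_SIZES = {-1, 32, 64, 128}


@dataclass
class GPTQConfig:
    bits: int
    group_size: int
    sym: bool  # models from `text-generation-server quantize` should report sym=False


def can_use_marlin(config: GPTQConfig, kernels_installed: bool, gpu_supported: bool) -> bool:
    """Hypothetical check: should GPTQ-Marlin serve this quantized model?"""
    return (
        kernels_installed
        and gpu_supported
        and config.sym  # Marlin does not support asymmetric quantization
        and config.bits in MARLIN_BITS
        and config.group_size in MARLIN_GROUP_SIZES
    )
```

With this rule, a model quantized asymmetrically (`sym=False`) falls back to the regular GPTQ kernels rather than Marlin, even when the hardware and kernels would otherwise qualify.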
Directory contents:

- merges
- __init__.py
- adapter.py
- chunks.py
- convert.py
- dist.py
- hub.py
- import_utils.py
- log.py
- logits_process.py
- peft.py
- segments.py
- sgmv.py
- speculate.py
- tokens.py
- watermark.py
- weights.py