text-generation-inference/server/text_generation_server/utils
Daniël de Kok e0d168ba20 Use GPTQ-Marlin for supported GPTQ configurations (#2111)
GPTQ-Marlin is currently the best-performing kernel for GPTQ models. So
let's use it by default if the kernels are installed, the GPU supports
it, and the kernels support the configuration.

For models generated by `text-generation-server quantize`, use
`sym=False`. This subcommand has used asymmetric quantization since the
beginning, and incorrectly reporting the model to be symmetric would use
GPTQ-Marlin (which does not support asymmetric quantization).
2024-09-24 03:57:32 +00:00
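The selection rule described in the commit message above (use GPTQ-Marlin only when the kernels are installed, the GPU supports them, and the quantization configuration is supported) can be sketched roughly as follows. The names below are hypothetical illustrations, not the actual text-generation-inference API; the supported bit widths are an assumption.

```python
from dataclasses import dataclass


@dataclass
class GPTQConfig:
    bits: int   # quantization bit width
    sym: bool   # True for symmetric, False for asymmetric quantization


def can_use_gptq_marlin(config: GPTQConfig,
                        kernels_installed: bool,
                        gpu_supported: bool) -> bool:
    """Hypothetical check mirroring the commit's decision logic:
    fall back to the plain GPTQ kernels unless every condition holds.
    GPTQ-Marlin does not support asymmetric quantization, and Marlin
    kernels are commonly limited to 4- and 8-bit (an assumption here).
    """
    return (
        kernels_installed
        and gpu_supported
        and config.sym                 # asymmetric configs are rejected
        and config.bits in (4, 8)
    )
```

Models produced by `text-generation-server quantize` would report `sym=False` under this scheme and therefore never be routed to GPTQ-Marlin.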
merges Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
__init__.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
adapter.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
chunks.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-25 15:10:53 +03:00
dist.py Removing IPEX_AVAIL. (#2115) 2024-09-24 03:52:23 +00:00
hub.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
import_utils.py Removing IPEX_AVAIL. (#2115) 2024-09-24 03:52:23 +00:00
log.py v1.3.4 2024-04-22 09:08:34 +03:00
logits_process.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
peft.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
segments.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
sgmv.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
speculate.py chore: formatting 2024-04-18 16:26:00 +03:00
tokens.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
watermark.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
weights.py Use GPTQ-Marlin for supported GPTQ configurations (#2111) 2024-09-24 03:57:32 +00:00