Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21)
# What does this PR do?

Some models are already converted and do not carry those values in the file; this change lets users run them with less friction.

I went with a purely env-based approach because adding flags would (imo) be very tedious to maintain. There is a lot of sanitization involved: those flags would be errors if not used in conjunction with `--quantize gptq`, and they would need to exist in both the launcher and the server and be passed through every function call in between.

This PR is intended as an easy escape hatch, not the de facto method to use GPTQ in TGI.

Fixes #500
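The fallback pattern the PR describes can be sketched as follows. This is a minimal illustration, not the actual TGI code: the variable names `GPTQ_BITS` and `GPTQ_GROUPSIZE`, the config keys, and the helper name are assumptions for the sake of the example.

```python
import os


def get_gptq_params(quantize_config: dict) -> tuple[int, int]:
    """Resolve GPTQ quantization parameters.

    Prefer values stored in the model's quantize config; fall back to
    environment variables for already-converted models whose files lack
    them. Names here are illustrative, not TGI's real identifiers.
    """
    bits = quantize_config.get("bits")
    if bits is None:
        # Escape hatch: user supplies the missing value via the environment.
        bits = int(os.environ["GPTQ_BITS"])

    groupsize = quantize_config.get("group_size")
    if groupsize is None:
        groupsize = int(os.environ["GPTQ_GROUPSIZE"])

    return bits, groupsize
```

Because the values are read from the environment at load time, no new CLI flags have to be threaded from the launcher through the server's call chain.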