Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-04-21 23:12:07 +00:00.
Tested with:

```
CUDA_VISIBLE_DEVICES=0 text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
EXLLAMA_VERSION=1 CUDA_VISIBLE_DEVICES=0 text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
CUDA_VISIBLE_DEVICES="0,1" text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
```

all with good and identical results on MI210.

---------

Co-authored-by: Felix Marty <felix@hf.co>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
Co-authored-by: OlivierDehaene <23298448+OlivierDehaene@users.noreply.github.com>
awq/quantize
gptq
__init__.py
convert.py
debug.py
dist.py
flash_attn.py
hub.py
import_utils.py
layers.py
log.py
logits_process.py
medusa.py
paged_attention.py
peft.py
speculate.py
tokens.py
watermark.py
weights.py