diff --git a/README.md b/README.md
index 869cc668..0ba49675 100644
--- a/README.md
+++ b/README.md
@@ -252,6 +252,8 @@ You can also quantize the weights with bitsandbytes to reduce the VRAM requireme
 make run-falcon-7b-instruct-quantize
 ```
 
+4bit quantization is available using the [NF4 and FP4 data types from bitsandbytes](https://arxiv.org/pdf/2305.14314.pdf). It can be enabled by providing `--quantize bitsandbytes-nf4` or `--quantize bitsandbytes-fp4` as a command line argument to `text-generation-launcher`.
+
 ## Develop
 
 ```shell
diff --git a/launcher/src/main.rs b/launcher/src/main.rs
index 35872867..b957837a 100644
--- a/launcher/src/main.rs
+++ b/launcher/src/main.rs
@@ -124,7 +124,8 @@ struct Args {
     num_shard: Option<usize>,
 
     /// Whether you want the model to be quantized. This will use `bitsandbytes` for
-    /// quantization on the fly, or `gptq`.
+    /// quantization on the fly, or `gptq`. 4bit quantization is available through
+    /// `bitsandbytes` by providing the `bitsandbytes-fp4` or `bitsandbytes-nf4` options.
     #[clap(long, env, value_enum)]
     quantize: Option<Quantization>,
 
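
For reference, a launcher invocation using the options introduced by this patch might look like the sketch below. This is illustrative only and not part of the diff; the model id is borrowed from the README's existing Falcon example, and any supported model works the same way.

```shell
# Illustrative usage of the new --quantize values (not part of the patch):
# launch a model with 4bit NF4 quantization
text-generation-launcher --model-id tiiuae/falcon-7b-instruct --quantize bitsandbytes-nf4

# or with 4bit FP4 quantization instead
text-generation-launcher --model-id tiiuae/falcon-7b-instruct --quantize bitsandbytes-fp4
```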