add documentation for 4bit quantization options

krzim 2023-07-19 22:10:34 +00:00
parent 4e0d8b2efb
commit c52a5d4456
2 changed files with 4 additions and 1 deletion


@@ -239,6 +239,8 @@ You can also quantize the weights with bitsandbytes to reduce the VRAM requirement
 make run-bloom-quantize # Requires 8xA100 40GB
 ```
+4bit quantization is available using the [NF4 and FP4 data types from bitsandbytes](https://arxiv.org/pdf/2305.14314.pdf). It can be enabled by providing `--quantize bitsandbytes-nf4` or `--quantize bitsandbytes-fp4` as a command line argument to `text-generation-launcher`.
+
 ## Develop
 
 ```shell
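The README addition above only names the flags; as a quick illustration, a launch command using the new mode might look like the following sketch, with `bigscience/bloom` standing in as a placeholder model id (any model supported by the launcher works the same way):

```shell
# Launch with 4bit NF4 quantization; swap in bitsandbytes-fp4 for the FP4 data type.
# The model id below is only an illustrative placeholder.
text-generation-launcher --model-id bigscience/bloom --quantize bitsandbytes-nf4
```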


@@ -104,7 +104,8 @@ struct Args {
     num_shard: Option<usize>,
 
     /// Whether you want the model to be quantized. This will use `bitsandbytes` for
-    /// quantization on the fly, or `gptq`.
+    /// quantization on the fly, or `gptq`. 4bit quantization is available through
+    /// `bitsandbytes` by providing the `bitsandbytes-fp4` or `bitsandbytes-nf4` options.
     #[clap(long, env, value_enum)]
     quantize: Option<Quantization>,
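Since the `quantize` argument is declared with `env` in its `#[clap(...)]` attribute, the same options should also be settable through an environment variable. A minimal sketch, assuming clap's default field-name-derived variable name `QUANTIZE` and the same placeholder model id as above:

```shell
# Equivalent to passing --quantize bitsandbytes-fp4 on the command line,
# assuming clap derives the QUANTIZE env var name from the `quantize` field.
QUANTIZE=bitsandbytes-fp4 text-generation-launcher --model-id bigscience/bloom
```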