Added back CLI md

Merve Noyan 2023-08-10 14:09:34 +03:00 committed by GitHub
parent 58092d33a3
commit 712393a702
4 changed files with 38 additions and 36 deletions


@@ -15,4 +15,6 @@
    title: Preparing Model for Serving
  - local: basic_tutorials/gated_model_access
    title: Serving Private & Gated Models
  - local: basic_tutorials/using_cli
    title: Using TGI through CLI
  title: Tutorials


@@ -6,6 +6,7 @@ Text Generation Inference improves the model in several aspects.
TGI supports [bits-and-bytes](https://github.com/TimDettmers/bitsandbytes#bitsandbytes) and [GPT-Q](https://arxiv.org/abs/2210.17323) quantization. To speed up inference with quantization, simply set the `quantize` flag to `bitsandbytes` or `gptq` depending on the quantization technique you wish to use. When using GPT-Q quantization, you need to point to one of the models [here](https://huggingface.co/models?search=gptq).
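As a minimal sketch (the model ID here is only illustrative), enabling bits-and-bytes quantization when launching a server could look like:
```shell
text-generation-launcher --model-id tiiuae/falcon-7b --quantize bitsandbytes
```
For GPT-Q, you would instead point `--model-id` at one of the pre-quantized checkpoints linked above and pass `--quantize gptq`.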
## RoPE Scaling
RoPE scaling can be used to increase the sequence length of the model at inference time without necessarily fine-tuning it. To enable RoPE scaling, pass the `--rope-scaling`, `--max-input-length` and `--rope-factor` flags when running through the CLI. `--rope-scaling` can take the values `linear` or `dynamic`. If your model is not fine-tuned to a longer sequence length, use `dynamic`. `--rope-factor` is the ratio between the intended maximum sequence length and the model's original maximum sequence length. Make sure to pass `--max-input-length` to provide the maximum input length for extension.
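For example, assuming a model whose original maximum sequence length is 2048 tokens, extending it to 4096 tokens could look like the following (model ID and values are illustrative):
```shell
text-generation-launcher --model-id tiiuae/falcon-7b \
    --rope-scaling dynamic --rope-factor 2.0 --max-input-length 4096
```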


@@ -0,0 +1,35 @@
# Using TGI through CLI
You can use the TGI command-line interface (CLI) to download weights, serve and quantize models, or get information on serving parameters.
`text-generation-server` lets you download model weights with the `download-weights` command, like below 👇
```shell
text-generation-server download-weights MODEL_HUB_ID
```
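For instance, to download the weights of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) (an arbitrary example model):
```shell
text-generation-server download-weights bigscience/bloom-560m
```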
You can also use it to quantize models like below 👇
```shell
text-generation-server quantize MODEL_HUB_ID OUTPUT_DIR
```
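For instance, quantizing that same model into a local output directory (the path is arbitrary) might look like:
```shell
text-generation-server quantize bigscience/bloom-560m /data/bloom-560m-gptq
```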
You can use `text-generation-launcher` to serve models.
```shell
text-generation-launcher --model-id MODEL_HUB_ID --port 8080
```
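Once the server is running, you can send generation requests to it, for example with curl:
```shell
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```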
There are many options and parameters you can pass to `text-generation-launcher`. The documentation for the CLI is kept minimal and relies on the self-generated documentation, which you can view by running
```shell
text-generation-launcher --help
```
You can also find it hosted in this [Swagger UI](https://huggingface.github.io/text-generation-inference/).
The same documentation can be found for `text-generation-server`.
```shell
text-generation-server --help
```


@@ -19,42 +19,6 @@ If you would like to serve models with custom kernels, run
BUILD_EXTENSIONS=True make install
```
## Running CLI
After installation, you will be able to use `text-generation-server` and `text-generation-launcher`.
`text-generation-server` lets you download model weights with the `download-weights` command, like below 👇
```shell
text-generation-server download-weights MODEL_HUB_ID
```
You can also use it to quantize models like below 👇
```shell
text-generation-server quantize MODEL_HUB_ID OUTPUT_DIR
```
You can use `text-generation-launcher` to serve models.
```shell
text-generation-launcher --model-id MODEL_HUB_ID --port 8080
```
There are many options and parameters you can pass to `text-generation-launcher`. The documentation for the CLI is kept minimal and relies on the self-generated documentation, which you can view by running
```shell
text-generation-launcher --help
```
You can also find it hosted in this [Swagger UI](https://huggingface.github.io/text-generation-inference/).
The same documentation can be found for `text-generation-server`.
```shell
text-generation-server --help
```
## Local Installation from Source
Before you start, you will need to set up your environment and install Text Generation Inference. Text Generation Inference is tested on **Python 3.9+**.