New release 2.3.1 (#2604)

* New release 2.3.1

* Update doc number
Nicolas Patry 2024-10-03 14:43:49 +02:00 committed by yuanwu
parent 902f526d69
commit 34e98b14ef
7 changed files with 38 additions and 22 deletions

View File

@@ -20,7 +20,7 @@ default-members = [
resolver = "2"
[workspace.package]
version = "2.3.1-dev0"
version = "2.3.2-dev0"
edition = "2021"
authors = ["Olivier Dehaene"]
homepage = "https://github.com/huggingface/text-generation-inference"

View File

@@ -21,11 +21,27 @@ limitations under the License.
- [Text Generation Inference on Habana Gaudi](#text-generation-inference-on-habana-gaudi)
  - [Table of contents](#table-of-contents)
  - [Running TGI on Gaudi](#running-tgi-on-gaudi)
+  - [TGI-Gaudi Benchmark](#tgi-gaudi-benchmark)
+    - [Static Batching Benchmark](#static-batching-benchmark)
+    - [Continuous Batching Benchmark](#continuous-batching-benchmark)
+  - [Tested Models and Configurations](#tested-models-and-configurations)
+  - [Running TGI with BF16 Precision](#running-tgi-with-bf16-precision)
+    - [Llama2-7B on 1 Card](#llama2-7b-on-1-card)
+    - [Llama2-70B on 8 cards](#llama2-70b-on-8-cards)
+    - [Llama3.1-8B on 1 card](#llama31-8b-on-1-card)
+    - [Llama3.1-70B 8 cards](#llama31-70b-8-cards)
+    - [Llava-v1.6-Mistral-7B on 1 card](#llava-v16-mistral-7b-on-1-card)
+  - [Running TGI with FP8 Precision](#running-tgi-with-fp8-precision)
+    - [Llama2-7B on 1 Card](#llama2-7b-on-1-card-1)
+    - [Llama2-70B on 8 Cards](#llama2-70b-on-8-cards-1)
+    - [Llama3.1-8B on 1 Card](#llama31-8b-on-1-card-1)
+    - [Llama3.1-70B on 8 cards](#llama31-70b-on-8-cards)
+    - [Llava-v1.6-Mistral-7B on 1 Card](#llava-v16-mistral-7b-on-1-card-1)
+    - [Llava-v1.6-Mistral-7B on 8 Cards](#llava-v16-mistral-7b-on-8-cards)
  - [Adjusting TGI Parameters](#adjusting-tgi-parameters)
-  - [Environment variables](#environment-variables)
+  - [Environment Variables](#environment-variables)
  - [Profiler](#profiler)
  - [License](#license)

## Running TGI on Gaudi
@@ -33,7 +49,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
1. Pull the official Docker image with:
```bash
-docker pull ghcr.io/huggingface/tgi-gaudi:2.0.5
+docker pull ghcr.io/huggingface/tgi-gaudi:2.3.1
```
> [!NOTE]
> Alternatively, you can build the Docker image using the `Dockerfile` located in this folder with:
@@ -54,7 +70,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
-e OMPI_MCA_btl_vader_single_copy_mechanism=none -e HF_TOKEN=$hf_token \
-e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true -e USE_FLASH_ATTENTION=true \
-e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model --max-input-tokens 1024 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 --model-id $model --max-input-tokens 1024 \
--max-total-tokens 2048
```
@@ -68,7 +84,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
-e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
-e HF_TOKEN=$hf_token -e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true \
-e USE_FLASH_ATTENTION=true -e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice \
---ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model --sharded true \
+--ipc=host ghcr.io/huggingface/tgi-gaudi:2.3.1 --model-id $model --sharded true \
--num-shard 8 --max-input-tokens 1024 --max-total-tokens 2048
```
3. Wait for the TGI-Gaudi server to come online. You will see something like so:
@@ -141,7 +157,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-length 1024 --max-total-tokens 2048 \
--max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -173,7 +189,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--sharded true --num-shard 8 \
--max-input-length 1024 --max-total-tokens 2048 \
@@ -205,7 +221,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-length 1024 --max-total-tokens 2048 \
--max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -237,7 +253,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--sharded true --num-shard 8 \
--max-input-length 1024 --max-total-tokens 2048 \
@@ -269,7 +285,7 @@ docker run -p 8080:80 \
-e BATCH_BUCKET_SIZE=1 \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
--max-total-tokens 8192 --max-batch-total-tokens 32768
@@ -320,7 +336,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-length 1024 --max-total-tokens 2048 \
--max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -355,7 +371,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--sharded true --num-shard 8 \
--max-input-length 1024 --max-total-tokens 2048 \
@@ -391,7 +407,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-length 1024 --max-total-tokens 2048 \
--max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -426,7 +442,7 @@ docker run -p 8080:80 \
-e FLASH_ATTENTION_RECOMPUTE=true \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--sharded true --num-shard 8 \
--max-input-length 1024 --max-total-tokens 2048 \
@@ -459,7 +475,7 @@ docker run -p 8080:80 \
-e BATCH_BUCKET_SIZE=1 \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
--max-total-tokens 8192 --max-batch-total-tokens 32768
@@ -490,7 +506,7 @@ docker run -p 8080:80 \
-e BATCH_BUCKET_SIZE=1 \
--cap-add=sys_nice \
--ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.3.1 \
--model-id $model \
--sharded true --num-shard 8 \
--max-input-tokens 4096 --max-batch-prefill-tokens 16384 \

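Beyond the diff itself: every launch example above publishes the server on host port 8080 (`-p 8080:80`), so once a container reports readiness the new image can be smoke-tested against TGI's standard `/generate` route. A minimal check (the prompt and token budget below are arbitrary):

```bash
# Send one generation request to the freshly started server;
# a JSON response containing "generated_text" confirms the 2.3.1 image is serving.
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
    -H 'Content-Type: application/json'
```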
View File

@@ -10,7 +10,7 @@
"name": "Apache 2.0",
"url": "https://www.apache.org/licenses/LICENSE-2.0"
},
"version": "2.3.1-dev0"
"version": "2.3.2-dev0"
},
"paths": {
"/": {

View File

@@ -11,7 +11,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
docker run --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
--device=/dev/kfd --device=/dev/dri --group-add video \
--ipc=host --shm-size 256g --net host -v $volume:/data \
-ghcr.io/huggingface/text-generation-inference:2.3.0-rocm \
+ghcr.io/huggingface/text-generation-inference:2.3.1-rocm \
--model-id $model
```
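Since this PR is purely a version bump, it can be worth verifying that the running container reports the expected release. A sketch, assuming TGI's standard `/info` route and that the `--net host` launch above leaves the server on the container's default port 80 (adjust the port if your launch differs):

```bash
# /info returns server metadata as JSON; its "version" field should now read 2.3.1.
curl -s 127.0.0.1:80/info
```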

View File

@@ -12,7 +12,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
docker run --rm --privileged --cap-add=sys_nice \
--device=/dev/dri \
--ipc=host --shm-size 1g --net host -v $volume:/data \
-ghcr.io/huggingface/text-generation-inference:2.3.0-intel-xpu \
+ghcr.io/huggingface/text-generation-inference:2.3.1-intel-xpu \
--model-id $model --cuda-graphs 0
```
@@ -29,7 +29,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
docker run --rm --privileged --cap-add=sys_nice \
--device=/dev/dri \
--ipc=host --shm-size 1g --net host -v $volume:/data \
-ghcr.io/huggingface/text-generation-inference:2.3.0-intel-cpu \
+ghcr.io/huggingface/text-generation-inference:2.3.1-intel-cpu \
--model-id $model --cuda-graphs 0
```

View File

@@ -11,7 +11,7 @@ model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 64g -p 8080:80 -v $volume:/data \
-ghcr.io/huggingface/text-generation-inference:2.3.0 \
+ghcr.io/huggingface/text-generation-inference:2.3.1 \
--model-id $model
```

View File

@@ -11,7 +11,7 @@ model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
-ghcr.io/huggingface/text-generation-inference:2.3.0 \
+ghcr.io/huggingface/text-generation-inference:2.3.1 \
--model-id $model
```
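
A release like this bumps the same image tags across many documents, so a repo-wide sweep for leftovers is cheap insurance. A minimal sketch, run from the repository root, grepping for the two superseded tags from this diff:

```bash
# List any Markdown files still pinned to the pre-release image tags.
grep -rnE 'tgi-gaudi:2\.0\.5|text-generation-inference:2\.3\.0' --include='*.md' .
```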