Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-11 04:14:52 +00:00
docs: set tag to 1.1

This commit is contained in:
parent 96a982ad8f
commit 2200b7e6c4
@@ -8,7 +8,7 @@ Let's say you want to deploy [Falcon-7B Instruct](https://huggingface.co/tiiuae/
 model=tiiuae/falcon-7b-instruct
 volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

-docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.1.1 --model-id $model
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.1 --model-id $model
 ```

 <Tip warning={true}>
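Once the container above is running, the docs this commit edits go on to query the server over HTTP; the next hunk's header shows the `curl 127.0.0.1:8080/generate \` line from that section. As a rough sketch of such a request, assuming the default port mapping from the command above and a typical payload shape (not part of this diff):

```bash
# Sketch of a request to the /generate endpoint exposed by the container above.
# The payload shape (inputs + parameters.max_new_tokens) is assumed from the
# surrounding docs, not shown in this commit.
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```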
@@ -85,7 +85,7 @@ curl 127.0.0.1:8080/generate \
 To see all possible deploy flags and options, you can use the `--help` flag. It's possible to configure the number of shards, quantization, generation parameters, and more.

 ```bash
-docker run ghcr.io/huggingface/text-generation-inference:1.1.1 --help
+docker run ghcr.io/huggingface/text-generation-inference:1.1 --help
 ```

 </Tip>
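The context line in this hunk mentions configuring shards and quantization. As an illustrative sketch only, assuming the `--num-shard` and `--quantize` launcher flags (they are not shown in this commit), such options would follow `--model-id`:

```bash
# Illustrative only: the launcher flags below are assumptions, not part of this diff.
# Shard the model across 2 GPUs and load weights with bitsandbytes quantization.
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:1.1 --model-id $model \
    --num-shard 2 --quantize bitsandbytes
```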