Mirror of https://github.com/huggingface/text-generation-inference.git
Synced 2025-04-22 15:32:08 +00:00
Remove References to torch compile mode in readme (#236)
commit 21c13ff3a6 (parent 8ae5d4c7d6)
README.md | 14 +-------------
1 file changed, 1 insertion(+), 13 deletions(-)
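To inspect the change locally, the commit can be checked out from the mirror by hash. A minimal sketch, assuming the abbreviated hashes above resolve in a fresh clone of the mirrored repository:

```bash
# Clone the mirror and check out this commit (hashes from the header above).
git clone https://github.com/huggingface/text-generation-inference.git
cd text-generation-inference
git checkout 21c13ff3a6

# Reproduce the README diff shown below against the parent commit.
git diff 8ae5d4c7d6 21c13ff3a6 -- README.md
```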
````diff
@@ -58,19 +58,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
    --max-total-tokens 2048
 ```
 
-ii. On 1 Gaudi card using PyTorch eager mode with torch compile:
-```bash
-model=meta-llama/Llama-2-7b-hf
-hf_token=YOUR_ACCESS_TOKEN
-volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
-
-docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all \
-   -e PT_HPU_LAZY_MODE=0 -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
-   -e HF_TOKEN=$hf_token --cap-add=sys_nice --ipc=host \
-   ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model --max-input-tokens 1024 --max-total-tokens 2048
-```
-
-iii. On 8 Gaudi cards:
+ii. On 8 Gaudi cards:
 ```bash
 model=meta-llama/Llama-2-70b-hf
 hf_token=YOUR_ACCESS_TOKEN
````
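Once a container launched by either command is running, TGI serves its REST API on host port 8080 (mapped from container port 80 by `-p 8080:80`). A minimal smoke test against the `/generate` endpoint; the prompt and generation parameters here are illustrative placeholders:

```bash
# Query the running TGI server; host port 8080 comes from the
# `-p 8080:80` mapping above. Prompt and max_new_tokens are placeholders.
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 32}}' \
    -H 'Content-Type: application/json'
```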