Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-10 20:04:52 +00:00
Update docs/source/basic_tutorials/non_core_models.md
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
This commit is contained in: parent 873573150f, commit 0966704dd6
@@ -17,7 +17,8 @@ docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingf
Finally, if the model is not on the Hugging Face Hub but on your local machine, you can pass the path to the folder that contains your model like below 👇

```bash
docker run --platform linux/x86_64 --shm-size 1g --net=host -p 8080:80 -v $volume:/data -e CUDA_VISIBLE_DEVICES= ghcr.io/huggingface/text-generation-inference:latest --model-id <PATH-TO-FOLDER>
# Make sure your model is in the $volume directory
docker run --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id /data/<PATH-TO-FOLDER>
```

You can refer to [transformers docs on custom models](https://huggingface.co/docs/transformers/main/en/custom_models) for more information.
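The commands above rely on the `-v $volume:/data` bind mount: a model folder placed under `$volume` on the host appears under `/data` inside the container, which is why the path passed to `--model-id` starts with `/data/`. A minimal sketch of that path mapping (the directory name `my-model` is illustrative, not from the docs):

```shell
# Host-side setup: $volume is the host directory that docker mounts at /data.
volume=$PWD/data
model_dir=my-model            # illustrative model folder name
mkdir -p "$volume/$model_dir"

# The same folder, as seen from inside the container via -v $volume:/data:
container_path="/data/$model_dir"
echo "host: $volume/$model_dir -> container: $container_path"
```

So whatever folder name you use under `$volume` on the host is the name you reference under `/data` in the `--model-id` argument.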
|