From 8bfd857f036b9613b4388e82d6b1e22f5039569c Mon Sep 17 00:00:00 2001
From: lewtun
Date: Mon, 11 Mar 2024 10:29:40 +0100
Subject: [PATCH] Use a better model for the quick tour

Falcon models are long superseded by better models like Zephyr and
OpenHermes. This PR updates the docs accordingly
---
 docs/source/quicktour.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/quicktour.md b/docs/source/quicktour.md
index 07dddfa8..70cf575c 100644
--- a/docs/source/quicktour.md
+++ b/docs/source/quicktour.md
@@ -2,10 +2,10 @@

 The easiest way of getting started is using the official Docker container. Install Docker following [their installation instructions](https://docs.docker.com/get-docker/).

-Let's say you want to deploy [Falcon-7B Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) model with TGI. Here is an example on how to do that:
+Let's say you want to deploy [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model with TGI. Here is an example on how to do that:

 ```bash
-model=tiiuae/falcon-7b-instruct
+model=teknium/OpenHermes-2.5-Mistral-7B
 volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

 docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model
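As a quick sanity check (not part of the patch itself), the deployed server can then be queried over TGI's `/generate` endpoint. A minimal sketch, assuming the container from the snippet above is running with the `-p 8080:80` port mapping:

```bash
# Send a test prompt to the TGI container started above;
# host port 8080 is mapped to port 80 inside the container.
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```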