From d8f1337e7e6071e5a815720a7382072cf7694216 Mon Sep 17 00:00:00 2001
From: Pasquale Minervini
Date: Mon, 14 Aug 2023 15:41:13 +0200
Subject: [PATCH] README edit -- running the service with no GPU or CUDA
support (#773)
One-line addition to the README to show how to run the service on a
machine without GPUs or CUDA support (e.g., for local prototyping).
---------
Co-authored-by: Nicolas Patry
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index aa5ccaad..e334b800 100644
--- a/README.md
+++ b/README.md
@@ -88,7 +88,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.0.1 --model-id $model
```
-**Note:** To use GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.
+**Note:** To use GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 11.8 or higher. To run the Docker container on a machine with no GPUs or CUDA support, remove the `--gpus all` flag and add `--disable-custom-kernels`. Please note that CPU is not the intended platform for this project, so performance might be subpar.
To see all options to serve your models (in the [code](https://github.com/huggingface/text-generation-inference/blob/main/launcher/src/main.rs) or in the CLI):
```
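For reference, the CPU-only invocation that the new note describes would look like the sketch below. It mirrors the README's existing `docker run` example, with `--gpus all` removed and `--disable-custom-kernels` passed through as a launcher argument; the `model` value is illustrative, and any supported model id works.

```shell
model=tiiuae/falcon-7b-instruct  # illustrative; any supported model id
volume=$PWD/data  # share a volume with the container to avoid re-downloading weights

# Same image as the README example, minus --gpus all; --disable-custom-kernels
# tells the launcher to skip the custom CUDA kernels so the server can start without a GPU.
docker run --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:1.0.1 \
    --model-id $model --disable-custom-kernels
```

Expect noticeably slower generation than on a GPU; as the note says, CPU is not the intended platform for this project.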