diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
index ab85682b7..8fcba516b 100644
--- a/docs/source/_toctree.yml
+++ b/docs/source/_toctree.yml
@@ -14,7 +14,7 @@
   - local: installation_inferentia
     title: Using TGI with AWS Inferentia
   - local: installation_tpu
-    title: Using TGI with Google TPU
+    title: Using TGI with Google TPUs
   - local: installation_intel
     title: Using TGI with Intel GPUs
   - local: installation
diff --git a/docs/source/installation_tpu.md b/docs/source/installation_tpu.md
index 208ebce3c..559e83aa7 100644
--- a/docs/source/installation_tpu.md
+++ b/docs/source/installation_tpu.md
@@ -1,3 +1,3 @@
-# Using TGI with Google TPU
+# Using TGI with Google TPUs
 
 Check out this [guide](https://huggingface.co/docs/optimum-tpu) on how to serve models with TGI on TPUs.