mirror of https://github.com/huggingface/text-generation-inference.git
synced 2025-04-19 13:52:07 +00:00
parent 203cade244
commit 1470aec9d9
@@ -14,7 +14,7 @@
   - local: installation_inferentia
     title: Using TGI with AWS Inferentia
   - local: installation_tpu
-    title: Using TGI with Google TPU
+    title: Using TGI with Google TPUs
   - local: installation_intel
     title: Using TGI with Intel GPUs
   - local: installation
@@ -1,3 +1,3 @@
-# Using TGI with Google TPU
+# Using TGI with Google TPUs
 
 Check out this [guide](https://huggingface.co/docs/optimum-tpu) on how to serve models with TGI on TPUs.