Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-19 13:52:07 +00:00)
📝 add guide on using TPU with TGI in the docs (#2907)
parent dc9b8e9814
commit 46994b34fb
@@ -13,6 +13,8 @@
     title: Using TGI with Intel Gaudi
   - local: installation_inferentia
     title: Using TGI with AWS Inferentia
+  - local: installation_tpu
+    title: Using TGI with Google TPU
   - local: installation_intel
     title: Using TGI with Intel GPUs
   - local: installation
docs/source/installation_tpu.md (new file, 3 lines)
@@ -0,0 +1,3 @@
+# Using TGI with Google TPU
+
+Check out this [guide](https://huggingface.co/docs/optimum-tpu) on how to serve models with TGI on TPUs.
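The new page defers the TPU serving setup to the linked optimum-tpu guide. As a minimal sketch of talking to a TGI server once one is running, a JSON body for TGI's `/generate` route can be built like this (the host and port in the comments are placeholder assumptions, not part of this commit):

```python
# Sketch: building a request body for a running TGI server's /generate
# endpoint. The host/port mentioned below are hypothetical; the actual
# TPU deployment is described in the optimum-tpu guide linked above.
import json

def build_generate_payload(prompt: str, max_new_tokens: int = 64) -> dict:
    """JSON body for TGI's /generate endpoint."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

payload = build_generate_payload("What is a TPU?")
body = json.dumps(payload)
# POST `body` to e.g. http://localhost:8080/generate with
# Content-Type: application/json to receive the completion.
```

The same payload shape works regardless of the backing hardware, so a client written against a GPU deployment needs no changes when the server is hosted on TPU.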