Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-10 20:04:52 +00:00
add reference to Inferentia2 in the doc
parent e6b3a1e0a8
commit b847cf33b2
@@ -48,3 +48,4 @@ TGI also has support of ROCm-enabled AMD Instinct MI210 and MI250 GPUs, with pag
TGI is also supported on the following AI hardware accelerators:
- *Habana first-gen Gaudi and Gaudi2:* check out this [example](https://github.com/huggingface/optimum-habana/tree/main/text-generation-inference) on how to serve models with TGI on Gaudi and Gaudi2 with [Optimum Habana](https://huggingface.co/docs/optimum/habana/index)
- *AWS Inferentia2:* check out this [guide](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference) on how to serve models with TGI on Inferentia2.
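
Whichever accelerator the server runs on, a deployed TGI endpoint is queried over the same HTTP API. Below is a minimal sketch using `InferenceClient` from `huggingface_hub`; the endpoint URL, prompt, and token budget are placeholder assumptions, not part of this commit.

```python
from huggingface_hub import InferenceClient

# Assumed address of a TGI server already launched on Gaudi/Gaudi2 or
# Inferentia2 (see the guides linked above); adjust the URL as needed.
client = InferenceClient("http://127.0.0.1:8080")

# Send a simple text-generation request to the running TGI endpoint.
output = client.text_generation(
    "What is Deep Learning?",  # placeholder prompt
    max_new_tokens=64,
)
print(output)
```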