Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-11 20:34:54 +00:00
WTF ?
This commit is contained in:
parent 18da570060
commit b3be512efc
@@ -45,4 +45,4 @@ If you wish to serve a supported model that already exists on a local folder, ju
 
 ```bash
 text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
-``````
+```
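For context, the snippet being fixed is the local-folder quickstart: a stray duplicated fence (`` `````` ``) was closing the bash block with six backticks instead of three. A minimal usage sketch of the command itself, assuming the weights have first been downloaded with `huggingface-cli` (the model id and local path here are illustrative):

```bash
# Download a model snapshot into a local folder (illustrative id and path).
huggingface-cli download bigscience/bloom-560m --local-dir /data/bloom-560m

# Point the launcher at the local folder instead of a Hub model id.
text-generation-launcher --model-id /data/bloom-560m
```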
@@ -24,21 +24,7 @@ If you wish to serve a supported model that already exists on a local folder, ju
 
 
 ```bash
 text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
-``````
-
-
-## Supported Hardware
-
-TGI optimized models are supported on NVIDIA [A100](https://www.nvidia.com/en-us/data-center/a100/), [A10G](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) and [T4](https://www.nvidia.com/en-us/data-center/tesla-t4/) GPUs with CUDA 12.2+. Note that you have to install [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) to use it. For other NVIDIA GPUs, continuous batching will still apply, but some operations like flash attention and paged attention will not be executed.
-
-TGI also has support for ROCm-enabled AMD Instinct MI210 and MI250 GPUs, with paged attention, GPTQ quantization, and flash attention v2. The following features are currently not supported in the ROCm version of TGI, and the support may be extended in the future:
-* Loading [AWQ](https://huggingface.co/docs/transformers/quantization#awq) checkpoints.
-* Flash [layer norm kernel](https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm)
-* Kernel for sliding window attention (Mistral)
-
-TGI is also supported on the following AI hardware accelerators:
-- *Habana first-gen Gaudi and Gaudi2:* check out this [repository](https://github.com/huggingface/tgi-gaudi) to serve models with TGI on Gaudi and Gaudi2 with [Optimum Habana](https://huggingface.co/docs/optimum/habana/index)
-* *AWS Inferentia2:* check out this [guide](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference) on how to serve models with TGI on Inferentia2.
+```
 ```
 """