From fcee7035c0ed09745deca95d571b77f425cb1d0d Mon Sep 17 00:00:00 2001
From: Felix Marty
Date: Thu, 25 Jan 2024 18:19:15 +0000
Subject: [PATCH] update doc

---
 docs/source/supported_models.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/supported_models.md b/docs/source/supported_models.md
index 004790ab..df5102c2 100644
--- a/docs/source/supported_models.md
+++ b/docs/source/supported_models.md
@@ -43,8 +43,8 @@ text-generation-launcher --model-id
 
 TGI optimized models are supported on NVIDIA [A100](https://www.nvidia.com/en-us/data-center/a100/), [A10G](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) and [T4](https://www.nvidia.com/en-us/data-center/tesla-t4/) GPUs with CUDA 12.2+. Note that you have to install [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) to use it. For other NVIDIA GPUs, continuous batching will still apply, but some operations like flash attention and paged attention will not be executed.
 
-TGI also has support of ROCm-enabled AMD Instinct MI210 and MI250 GPUs, with paged attention and flash attention v2 support. The following features are currently not supported in the ROCm version of TGI, and the supported may be extended in the future:
-* Quantization (GPTQ, AWQ, etc.)
+TGI also supports ROCm-enabled AMD Instinct MI210 and MI250 GPUs, with paged attention, GPTQ quantization and flash attention v2. The following features are currently not supported in the ROCm version of TGI, and support may be extended in the future:
+* Loading [AWQ](https://huggingface.co/docs/transformers/quantization#awq) checkpoints.
 * Flash [layer norm kernel](https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm)
 * Kernel for sliding window attention (Mistral)
 
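For reference, the launch flow this doc change describes looks roughly as follows. This is a minimal sketch, not the project's official instructions: the `1.4` / `1.4-rocm` image tags and the example model are assumptions and may differ for your setup.

```bash
# Pick a model to serve and a host directory to cache downloaded weights.
# tiiuae/falcon-7b-instruct is only an illustrative choice.
model=tiiuae/falcon-7b-instruct
volume=$PWD/data

# NVIDIA (A100 / A10G / T4, CUDA 12.2+): the --gpus flag is what requires
# the NVIDIA Container Toolkit mentioned in the doc.
docker run --gpus all --shm-size 1g -p 8080:80 -v "$volume:/data" \
    ghcr.io/huggingface/text-generation-inference:1.4 \
    --model-id "$model"

# AMD (Instinct MI210 / MI250): ROCm containers are given GPU access by
# passing through /dev/kfd and /dev/dri rather than via --gpus.
docker run --device=/dev/kfd --device=/dev/dri --group-add video \
    --ipc=host --shm-size 1g -p 8080:80 -v "$volume:/data" \
    ghcr.io/huggingface/text-generation-inference:1.4-rocm \
    --model-id "$model"
```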