doc: Formatting
commit 7788a6b849
parent 491b5726a6
@@ -6,8 +6,8 @@ whether you prioritize performance, ease of use, or compatibility with specific
 TGI remains consistent across backends, allowing you to switch between them seamlessly.
 
 **Supported backends:**
-* TGI CUDA backend: This high-performance backend is optimized for NVIDIA GPUs and serves as the default option
+* **TGI CUDA backend**: This high-performance backend is optimized for NVIDIA GPUs and serves as the default option
 within TGI. Developed in-house, it boasts numerous optimizations and is used in production by various projects, including those by Hugging Face.
-* [TGI TRTLLM backend](./backends/trtllm): This backend leverages NVIDIA's TensorRT library to accelerate LLM inference.
+* **[TGI TRTLLM backend](./backends/trtllm)**: This backend leverages NVIDIA's TensorRT library to accelerate LLM inference.
 It utilizes specialized optimizations and custom kernels for enhanced performance.
 However, it requires a model-specific compilation step for each GPU architecture.
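The unchanged prose in this hunk claims that TGI stays consistent across backends. A minimal sketch of what that looks like from the client side, assuming a TGI server is already running and reachable at http://localhost:8080 (the endpoint URL, prompt, and token limit are illustrative, not from this commit):

```python
# Client code is the same whether the server runs the CUDA backend or the
# TRTLLM backend, since both expose the same TGI HTTP API.
# Assumes a TGI server is reachable at http://localhost:8080 (illustrative).
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")

# text_generation calls TGI's /generate endpoint under the hood.
output = client.text_generation(
    "What is the capital of France?",
    max_new_tokens=32,
)
print(output)
```

Switching backends is then a deployment-time choice of which TGI build or image to run; client code like the above does not need to change.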