Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-09-10 03:44:54 +00:00
Bring back the old table in README

parent a0e5fc4189
commit 080542dbca
README.md (20 lines changed)
@@ -21,23 +21,21 @@ to power LLMs api-inference widgets.
 
 ## Table of contents
 
-- [Text Generation Inference](#text-generation-inference)
-- [Table of contents](#table-of-contents)
-- [Features](#features)
-- [Optimized architectures](#optimized-architectures)
-- [Get started](#get-started)
+- [Features](#features)
+- [Optimized Architectures](#optimized-architectures)
+- [Get Started](#get-started)
 - [Docker](#docker)
-- [API documentation](#api-documentation)
+- [API Documentation](#api-documentation)
+- [A note on Shared Memory](#a-note-on-shared-memory-shm)
 - [Distributed Tracing](#distributed-tracing)
-- [A note on Shared Memory (shm)](#a-note-on-shared-memory-shm)
-- [Local install](#local-install)
+- [Local Install](#local-install)
 - [CUDA Kernels](#cuda-kernels)
 - [Run BLOOM](#run-bloom)
 - [Download](#download)
 - [Run](#run)
 - [Quantization](#quantization)
 - [Develop](#develop)
 - [Testing](#testing)
 
 ## Features
 