mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-04-21 23:12:07 +00:00
88 lines
2.5 KiB
YAML
- sections:
  - local: index
    title: Text Generation Inference
  - local: quicktour
    title: Quick Tour
  - local: supported_models
    title: Supported Models
  - local: installation_nvidia
    title: Using TGI with Nvidia GPUs
  - local: installation_amd
    title: Using TGI with AMD GPUs
  - local: installation_gaudi
    title: Using TGI with Intel Gaudi
  - local: installation_inferentia
    title: Using TGI with AWS Inferentia
  - local: installation_intel
    title: Using TGI with Intel GPUs
  - local: installation
    title: Installation from source
  - local: multi_backend_support
    title: Multi-backend support
  - local: architecture
    title: Internal Architecture
  - local: usage_statistics
    title: Usage Statistics
  title: Getting started
- sections:
  - local: basic_tutorials/consuming_tgi
    title: Consuming TGI
  - local: basic_tutorials/preparing_model
    title: Preparing Model for Serving
  - local: basic_tutorials/gated_model_access
    title: Serving Private & Gated Models
  - local: basic_tutorials/using_cli
    title: Using TGI CLI
  - local: basic_tutorials/non_core_models
    title: Non-core Model Serving
  - local: basic_tutorials/safety
    title: Safety
  - local: basic_tutorials/using_guidance
    title: Using Guidance, JSON, tools
  - local: basic_tutorials/visual_language_models
    title: Visual Language Models
  - local: basic_tutorials/monitoring
    title: Monitoring TGI with Prometheus and Grafana
  - local: basic_tutorials/train_medusa
    title: Train Medusa
  title: Tutorials
- sections:
  - local: backends/trtllm
    title: TensorRT-LLM
  title: Backends
- sections:
  - local: reference/launcher
    title: All TGI CLI options
  - local: reference/metrics
    title: Exported Metrics
  - local: reference/api_reference
    title: API Reference
  title: Reference
- sections:
  - local: conceptual/chunking
    title: V3 update, caching and chunking
  - local: conceptual/streaming
    title: Streaming
  - local: conceptual/quantization
    title: Quantization
  - local: conceptual/tensor_parallelism
    title: Tensor Parallelism
  - local: conceptual/paged_attention
    title: PagedAttention
  - local: conceptual/safetensors
    title: Safetensors
  - local: conceptual/flash_attention
    title: Flash Attention
  - local: conceptual/speculation
    title: Speculation (Medusa, ngram)
  - local: conceptual/guidance
    title: How Guidance Works (via outlines)
  - local: conceptual/lora
    title: LoRA (Low-Rank Adaptation)
  - local: conceptual/external
    title: External Resources
  title: Conceptual Guides