Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 06:42:10 +00:00)
* feat: first draft load multiple lora
* feat: load weights within layer and refactor lora pass
* fix: refactor and reduce lora math
* feat: baseline impl single request multi lora support
* feat: prefer lorax implementation and port loading logic
* fix: prefer adapter_data and refactors
* feat: prefer lorax's custom punica kernels and add mlp loras
* fix: adjust batch for bgmv
* fix: adjust adapter_segments logic when in batch
* fix: refactor and move changes to v3 proto
* fix: pass model_id for all flash causal lms
* fix: pass model_id for all causal and seq2seq lms
* fix: add model_id to model test
* feat: add lora support to mistral and refactors
* feat: prefer model id in request
* fix: include rust code for adapter id
* feat: bump launcher and add new lora docs
* feat: support base model generation and refactors
* fix: rename doc to retry ci build
* feat: support if vlm models
* fix: add adapter_data param and avoid missing layers
* fix: add adapter_data param to phi and neox
* fix: update all models forwards to include adapter_data
* fix: add model_id to IdeficsCausalLM
* Update lora.md (fixed a typo)
* Update lora.md (fixing spam image)
* fix: add lora kernel to dockerfile, support running without kernels and refactors
* fix: avoid dockerfile conflict
* fix: refactors and adjust flash llama lora logic
* fix: skip llama test due to CI issue (temp)
* fix: skip llama test CI (temp) 2
* fix: revert skips and prefer updated ci token for tests
* fix: refactors and helpful comments
* fix: add noop in TensorParallelAdapterRowLinear too
* fix: refactor and move shard_lora_weights logic
* fix: exit early if no adapter_data

---------

Co-authored-by: Derek <datavistics@gmail.com>
69 lines · 2.1 KiB · YAML
- sections:
  - local: index
    title: Text Generation Inference
  - local: quicktour
    title: Quick Tour
  - local: installation_nvidia
    title: Using TGI with Nvidia GPUs
  - local: installation_amd
    title: Using TGI with AMD GPUs
  - local: installation_gaudi
    title: Using TGI with Intel Gaudi
  - local: installation_inferentia
    title: Using TGI with AWS Inferentia
  - local: installation
    title: Installation from source
  - local: supported_models
    title: Supported Models and Hardware
  - local: messages_api
    title: Messages API
  - local: architecture
    title: Internal Architecture
  title: Getting started
- sections:
  - local: basic_tutorials/consuming_tgi
    title: Consuming TGI
  - local: basic_tutorials/preparing_model
    title: Preparing Model for Serving
  - local: basic_tutorials/gated_model_access
    title: Serving Private & Gated Models
  - local: basic_tutorials/using_cli
    title: Using TGI CLI
  - local: basic_tutorials/launcher
    title: All TGI CLI options
  - local: basic_tutorials/non_core_models
    title: Non-core Model Serving
  - local: basic_tutorials/safety
    title: Safety
  - local: basic_tutorials/using_guidance
    title: Using Guidance, JSON, tools
  - local: basic_tutorials/visual_language_models
    title: Visual Language Models
  - local: basic_tutorials/monitoring
    title: Monitoring TGI with Prometheus and Grafana
  - local: basic_tutorials/train_medusa
    title: Train Medusa
  title: Tutorials
- sections:
  - local: conceptual/streaming
    title: Streaming
  - local: conceptual/quantization
    title: Quantization
  - local: conceptual/tensor_parallelism
    title: Tensor Parallelism
  - local: conceptual/paged_attention
    title: PagedAttention
  - local: conceptual/safetensors
    title: Safetensors
  - local: conceptual/flash_attention
    title: Flash Attention
  - local: conceptual/speculation
    title: Speculation (Medusa, ngram)
  - local: conceptual/guidance
    title: How Guidance Works (via outlines)
  - local: conceptual/lora
    title: LoRA (Low-Rank Adaptation)
  title: Conceptual Guides
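The commit message above threads multi-LoRA support through the launcher and the request path, and the new conceptual/lora page added to this table of contents documents how to use it. As a rough illustration only, the sketch below shows how a client might select one of the adapters loaded at launch; the --lora-adapters launcher flag and the adapter_id generate parameter are assumptions based on the lora docs referenced in the commit, not taken verbatim from this revision.

# Minimal sketch: query a TGI server started with several LoRA adapters, e.g.
#   text-generation-launcher --model-id <base-model> --lora-adapters adapter-a,adapter-b
# Both the --lora-adapters flag and the "adapter_id" parameter are assumptions
# taken from the multi-LoRA docs referenced in the commit message above.
import requests

TGI_URL = "http://localhost:8080/generate"  # default local TGI endpoint; adjust as needed

payload = {
    "inputs": "What is Deep Learning?",
    "parameters": {
        "max_new_tokens": 64,
        # Pick one of the adapters loaded at launch; omit this field to generate
        # with the base model ("support base model generation" in the commit above).
        "adapter_id": "adapter-a",
    },
}

response = requests.post(TGI_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["generated_text"])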