Move launch locally to an installation section

osanseviero 2023-08-10 10:19:55 +02:00
parent 33e6d264ab
commit 9304e07423
2 changed files with 11 additions and 4 deletions


```diff
@@ -3,12 +3,12 @@
     title: Text Generation Inference
   - local: quicktour
     title: Quick Tour
+  - local: installation
+    title: Installation
   - local: supported_models
     title: Supported Models and Hardware
   title: Getting started
 - sections:
-  - local: basic_tutorials/local_launch
-    title: Installing from the Source and Launching TGI
   - local: basic_tutorials/consuming_tgi
     title: Consuming TGI
   - local: basic_tutorials/preparing_model
```
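For readability, the "Getting started" portion of the toctree after this change can be sketched from the diff context alone (entries outside the hunk, such as the index page, are omitted):

```yaml
# Reconstructed from the diff context above; surrounding entries omitted.
- sections:
  - local: quicktour
    title: Quick Tour
  - local: installation
    title: Installation
  - local: supported_models
    title: Supported Models and Hardware
  title: Getting started
- sections:
  - local: basic_tutorials/consuming_tgi
    title: Consuming TGI
  - local: basic_tutorials/preparing_model
```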


```diff
@@ -1,9 +1,16 @@
-# Installing from the Source and Launching TGI
+# Installation
 
-Before you start, you will need to setup your environment, and install Text Generation Inference. Text Generation Inference is tested on **Python 3.9+**.
+This section explains how to install the CLI tool as well as installing TGI from source. **The strongly recommended approach is to use Docker, as it does not require much setup. Check [the Quick Tour](./quicktour) to learn how to run TGI with Docker.**
+
+## Install CLI
+
+TODO
 
 ## Local Installation from Source
 
+Before you start, you will need to setup your environment, and install Text Generation Inference. Text Generation Inference is tested on **Python 3.9+**.
+
 Text Generation Inference is available on pypi, conda and GitHub.
 
 To install and launch locally, first [install Rust](https://rustup.rs/) and create a Python virtual environment with at least
```
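The last hunk describes the local-install prerequisites (Rust plus a Python 3.9+ virtual environment). A minimal sketch of those steps as shell commands — the rustup one-liner and the `.venv` path are assumptions, not part of this commit:

```shell
# Sketch of the local-install prerequisites described in the doc.
# The rustup invocation and the venv path are assumptions, not from the commit.

# 1) Install Rust, needed to build TGI from source (left commented out here,
#    since it downloads and runs an installer):
#    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# 2) Create and activate a Python virtual environment; TGI is tested on Python 3.9+:
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; assert sys.version_info >= (3, 9), "TGI is tested on Python 3.9+"'
```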