Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-25 20:12:07 +00:00)
fix wording

commit 52cacff4a4
parent eccb8a0099
@@ -8,15 +8,15 @@
   - local: supported_models
     title: Supported Models and Hardware
   - local: launch_parameters
-    title: Configuring TGI
+    title: Launch Parameters
   - local: guides
     title: Guides
   title: Getting started
 - sections:
   - local: basic_tutorials/consuming_tgi
     title: Consuming TGI
-  - local: basic_tutorials/customize_inference
-    title: Control/Customize Inference
+  - local: basic_tutorials/request_parameters
+    title: Request Parameters
   - local: basic_tutorials/stream
     title: Stream Responses
   - local: basic_tutorials/preparing_model
@@ -1,4 +1,4 @@
-# Control/Customize Inference Generation with Text Generation Inference
+# Request Parameters for Text Generation Inference
 
 Text Generation Inference supports different parameters to control generation; you define them in the `parameters` attribute of the payload.
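For context on that sentence, here is a minimal sketch of a request that places generation options in the `parameters` attribute of the payload. It assumes a TGI server listening on `127.0.0.1:8080`; the endpoint, the parameter names shown (`max_new_tokens`, `temperature`, `do_sample`) and their values are illustrative and not taken from this commit.

```python
# Minimal sketch: send generation options in the `parameters` attribute.
# Assumes a TGI server is reachable at 127.0.0.1:8080 (illustrative values).
import requests

response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {
            "max_new_tokens": 20,   # cap on generated tokens
            "temperature": 0.7,     # softens the sampling distribution
            "do_sample": True,      # sample instead of greedy decoding
        },
    },
    timeout=60,
)
print(response.json())  # e.g. {"generated_text": "..."}
```

A successful call returns a JSON body containing the generated text.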
@@ -1,4 +1,4 @@
-# Configuration parameters for Text Generation Inference
+# Launch Parameters for Text Generation Inference
 
 Text Generation Inference allows you to customize the way you serve your models. You can use the following parameters to configure your server. You can set them either as environment variables or provide them as arguments when running `text-generation-launcher`. Environment variables are in `UPPER_CASE` and arguments are in `lower_case`.
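To illustrate that equivalence, here is a rough sketch, assuming `text-generation-launcher` is installed and on the PATH; the option names shown (`--model-id`/`MODEL_ID`, `--port`/`PORT`) and the model id are illustrative assumptions rather than part of this commit.

```python
# Rough sketch: the same setting passed either as a lower_case CLI argument
# or as an UPPER_CASE environment variable (assumed names, for illustration).
# Each call blocks while the launched server is running.
import os
import subprocess

MODEL = "bigscience/bloom-560m"  # illustrative model id

def launch_with_args() -> None:
    # Configuration via command-line arguments.
    subprocess.run(
        ["text-generation-launcher", "--model-id", MODEL, "--port", "8080"],
        check=True,
    )

def launch_with_env() -> None:
    # The same configuration via environment variables.
    env = {**os.environ, "MODEL_ID": MODEL, "PORT": "8080"}
    subprocess.run(["text-generation-launcher"], env=env, check=True)

if __name__ == "__main__":
    launch_with_args()  # or, equivalently, launch_with_env()
```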