text-generation-inference/docs/source/basic_tutorials/using_cli.md
Pedro Cuenca 3ab578b416 [docs] Fix link to Install CLI (#1526), 2024-02-02

# Using TGI CLI

You can use the TGI command-line interface (CLI) to download weights, serve and quantize models, or get information on serving parameters. To install the CLI, please refer to the installation section.

`text-generation-server` lets you download model weights with the `download-weights` command, like below 👇

```shell
text-generation-server download-weights MODEL_HUB_ID
```

You can also use it to quantize models, like below 👇

```shell
text-generation-server quantize MODEL_HUB_ID OUTPUT_DIR
```

You can use `text-generation-launcher` to serve models:

```shell
text-generation-launcher --model-id MODEL_HUB_ID --port 8080
```
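Once the launcher is running, you can query the server over HTTP. Below is a minimal, hypothetical Python client sketch using only the standard library; it posts a prompt to TGI's `/generate` REST endpoint. The host, port, and `max_new_tokens` value are assumptions matching the launcher invocation above.

```python
import json
import urllib.request


def build_payload(prompt, max_new_tokens=20):
    # JSON body for TGI's /generate endpoint: the prompt goes under
    # "inputs", generation options under "parameters".
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}


def generate(prompt, url="http://localhost:8080/generate", max_new_tokens=20):
    # Assumes a TGI server started with:
    #   text-generation-launcher --model-id MODEL_HUB_ID --port 8080
    data = json.dumps(build_payload(prompt, max_new_tokens)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["generated_text"]
```

With a server running locally, `generate("What is deep learning?")` returns the model's completion as a string.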

There are many options and parameters you can pass to `text-generation-launcher`. The documentation for the CLI is kept minimal and relies on self-generated documentation, which you can print by running

```shell
text-generation-launcher --help
```

These options are also documented in the hosted Swagger UI.

The same kind of documentation is available for `text-generation-server`:

```shell
text-generation-server --help
```