From b33a66148cad8b2b6bfe78046ccdb2ed9b7d1a8b Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Tue, 22 Aug 2023 23:35:16 +0300
Subject: [PATCH] Update and rename custom_models.md to non_core_models.md

---
 .../basic_tutorials/{custom_models.md => non_core_models.md}  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
 rename docs/source/basic_tutorials/{custom_models.md => non_core_models.md} (97%)

diff --git a/docs/source/basic_tutorials/custom_models.md b/docs/source/basic_tutorials/non_core_models.md
similarity index 97%
rename from docs/source/basic_tutorials/custom_models.md
rename to docs/source/basic_tutorials/non_core_models.md
index ec852e36..f6a8dc8e 100644
--- a/docs/source/basic_tutorials/custom_models.md
+++ b/docs/source/basic_tutorials/non_core_models.md
@@ -1,4 +1,4 @@
-# Custom Model Serving
+# Non-core Model Serving
 
 TGI supports various LLM architectures (see full list [here](https://github.com/huggingface/text-generation-inference#optimized-architectures)). If you wish to serve a model that is not one of the supported models, TGI will fallback to transformers implementation of that model. They can be loaded by:
 
@@ -18,4 +18,4 @@ You can serve these models using docker like below 👇
 
 ```bash
 docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id gpt2
-```
\ No newline at end of file
+```