From 45978034c9cb7033a6b7a126f5571d3c72753f27 Mon Sep 17 00:00:00 2001
From: Nicolas Patry
Date: Fri, 26 Jan 2024 10:15:31 +0100
Subject: [PATCH] Pre-emptive on sealion.

---
 docs/source/supported_models.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/source/supported_models.md b/docs/source/supported_models.md
index 004790ab..6f73b7ff 100644
--- a/docs/source/supported_models.md
+++ b/docs/source/supported_models.md
@@ -21,6 +21,7 @@ The following models are optimized and can be served with TGI, which uses custom
 - [Code Llama](https://huggingface.co/codellama)
 - [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
 - [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+- [Sealion](https://huggingface.co/aisingapore/sealion7b)
 - [Phi](https://huggingface.co/microsoft/phi-2)
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models: