From 8f1d266e69205ce22ca1fe1dc30e9d8f94658eb8 Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Fri, 18 Aug 2023 17:14:52 +0300
Subject: [PATCH] Update consuming_tgi.md

---
 docs/source/basic_tutorials/consuming_tgi.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/docs/source/basic_tutorials/consuming_tgi.md b/docs/source/basic_tutorials/consuming_tgi.md
index b69cee2e..a3262e16 100644
--- a/docs/source/basic_tutorials/consuming_tgi.md
+++ b/docs/source/basic_tutorials/consuming_tgi.md
@@ -143,13 +143,9 @@ You can try the demo directly here 👇
 
 You can disable streaming mode using `return` instead of `yield` in your inference function, like below.
 
-```diff
+```python
 def inference(message, history):
-    partial_message = ""
-    for token in client.text_generation(message, max_new_tokens=20, stream=True):
-        partial_message += token
--    yield partial_message
-+    return partial_message
+    return client.text_generation(message, max_new_tokens=20)
 ```
 
 You can read more about how to customize a `ChatInterface` [here](https://www.gradio.app/guides/creating-a-chatbot-fast).
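
For context, a minimal self-contained sketch of how the non-streaming `inference` function from this patch might be wired into a Gradio `ChatInterface`. The endpoint URL and the UI title are illustrative assumptions, not part of the patch:

```python
import gradio as gr
from huggingface_hub import InferenceClient

# Assumption: a TGI server is running locally; replace the URL with your endpoint.
client = InferenceClient(model="http://127.0.0.1:8080")

def inference(message, history):
    # Without stream=True, text_generation returns the full generated string,
    # so the function can return it directly instead of yielding partial text.
    return client.text_generation(message, max_new_tokens=20)

gr.ChatInterface(
    inference,
    title="TGI Chat (non-streaming)",  # hypothetical title for this sketch
).queue().launch()
```

Because the function returns a single string rather than yielding, the chat UI shows the reply only once generation has finished, instead of token by token.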