text-generation-inference/backends
Latest commit: 8c3669b287 by OlivierDehaene (2024-12-06 05:50:35 +01:00)
feat: auto max_new_tokens (#2803)

  * feat: auto max_new_tokens
  * update default
  * Fixing the tests.

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Directory       Latest commit message                                                         Last updated
client          Choosing input/total tokens automatically based on available VRAM? (#2673)   2024-10-28 04:59:49 +01:00
grpc-metadata   Rebase TRT-llm (#2331)                                                        2024-07-31 10:33:10 +02:00
trtllm          feat: add payload limit (#2726)                                               2024-11-21 18:20:15 +00:00
v2              feat: auto max_new_tokens (#2803)                                             2024-12-06 05:50:35 +01:00
v3              feat: auto max_new_tokens (#2803)                                             2024-12-06 05:50:35 +01:00