text-generation-inference/router/src
Latest commit 5df8059037 by Nicolas Patry: Auto max prefill (#2797)
* Attempt at automatic max batch prefill.

* Taking into account number of shards.

* Adding more cards.

* Adding A100 + H100

* Adding a few more cards.

* Logprobs cost too much.

* H100: better name, and keep factor of 2.

* Damn inflated sparse TFLOPS.

* Typo in H100.

* Updated the flops calculation (checked with fvcore).

* Chunking by default.

* Fix prefix caching for chat completion since we removed logprobs.

* More tests.

* Dropping all the prefill logprobs.

* Add a flag that enables users to get logprobs back.

* Repairing prompt token counting.

* Fixing a few tests.

* Remove some scaffolding.

* Attempting to reduce the issues (workarounds for now).
2024-12-06 05:52:00 +01:00
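The commit message above describes deriving a max prefill batch automatically from per-card compute (dense, not the inflated sparse figures) and the number of shards. A minimal Rust sketch of that idea follows; the function names, the 2 · params · tokens FLOPs rule of thumb, and the time-budget framing are illustrative assumptions, not TGI's actual implementation:

```rust
/// Approximate forward-pass FLOPs to prefill `tokens` tokens through a dense
/// transformer with `params` parameters, using the common 2 * params * tokens
/// rule of thumb (ignores the attention term, which is small for short prompts).
fn prefill_flops(params: u64, tokens: u64) -> u64 {
    2 * params * tokens
}

/// Pick a prefill token budget so that one prefill pass fits within
/// `budget_secs` of compute, given dense TFLOPS per card and the shard count.
/// (Hypothetical helper; TGI's real heuristic may differ.)
fn auto_max_prefill_tokens(
    params: u64,
    dense_tflops_per_card: f64,
    num_shards: u32,
    budget_secs: f64,
) -> u64 {
    // Aggregate dense compute across shards, in FLOPs per second.
    let total_flops_per_sec = dense_tflops_per_card * 1e12 * num_shards as f64;
    // Total FLOPs we are willing to spend on a single prefill pass.
    let flops_budget = total_flops_per_sec * budget_secs;
    // Invert prefill_flops: tokens = budget / (2 * params).
    (flops_budget / (2.0 * params as f64)) as u64
}
```

For example, a 7B-parameter model on a single card rated at 312 dense TFLOPS with a one-second budget yields a budget of roughly 22k prefill tokens; doubling the shard count doubles it.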
File             Last commit                                                 Date
infer            feat: auto max_new_tokens (#2803)                           2024-12-06 05:50:35 +01:00
config.rs        Auto max prefill (#2797)                                    2024-12-06 05:52:00 +01:00
kserve.rs        fix: simplify kserve endpoint and fix imports (#2119)       2024-06-25 19:30:10 -04:00
lib.rs           Auto max prefill (#2797)                                    2024-12-06 05:52:00 +01:00
logging.rs       Rebase TRT-llm (#2331)                                      2024-07-31 10:33:10 +02:00
sagemaker.rs     feat: allow any supported payload on /invocations (#2683)   2024-10-23 11:26:01 +00:00
server.rs        Auto max prefill (#2797)                                    2024-12-06 05:52:00 +01:00
usage_stats.rs   feat: allow any supported payload on /invocations (#2683)   2024-10-23 11:26:01 +00:00
validation.rs    feat: auto max_new_tokens (#2803)                           2024-12-06 05:50:35 +01:00
vertex.rs        Auto max prefill (#2797)                                    2024-12-06 05:52:00 +01:00