Documentation available at: https://huggingface.co/docs/text-generation-inference

Release

When making a release, please update the latest version in the documentation with:

# Versions are regex-escaped because sed treats "." as a wildcard.
export OLD_VERSION="2\.0\.3"
export NEW_VERSION="2\.0\.4"
# Substitute the old version for the new one in every Markdown file.
find . -name '*.md' -exec sed -i -e "s/$OLD_VERSION/$NEW_VERSION/g" {} \;
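
As a quick sanity check afterwards (a suggested step, not part of the documented release process), you can list any Markdown files that still mention the old version; grep's -F flag matches the literal string, so no regex escaping is needed here:

# Hypothetical verification step: report Markdown files still containing the old version.
# -r recurses, -l prints matching file names only, -F matches a fixed string.
grep -rlF --include='*.md' "2.0.3" . || echo "No stale version strings found."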