
Documentation available at: https://huggingface.co/docs/text-generation-inference

## Release

When making a release, please update the latest version in the documentation with:

```sh
# The dots are escaped because the values are used as sed regex patterns.
export OLD_VERSION="2\.0\.3"
export NEW_VERSION="2\.0\.4"
# Replace every occurrence of the old version across all Markdown files.
find . -name '*.md' -exec sed -i -e "s/$OLD_VERSION/$NEW_VERSION/g" {} \;
```
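
The dots in `OLD_VERSION` are escaped because the value is used as a `sed` search pattern, where an unescaped `.` matches any character. As a quick sanity check (this `grep` invocation is just an illustration, not part of the repository's release tooling), you can confirm that no stale version strings remain after the substitution:

```sh
# Expect no output: any line printed still contains the old version.
grep -rn "$OLD_VERSION" --include='*.md' .
```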