* Upgrade the version number.
* Remove modifications in Lock.
* Temporary branch to test the transformers backend with Torch 2.5.1 and TP>1.
* Fix the transformers backend.
`inference_mode` forces the use of `aten.matmul` instead of `aten.mm`;
the former has no sharding support, which crashes the transformers TP
support. Calling `lm_head.forward` directly also crashes because it
skips the hook that casts/decasts the DTensor (see the sketch below).
Torch 2.5.1 is required for sharding support.
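A minimal sketch of the two call-site changes described above, assuming the fix swaps `inference_mode` for `no_grad`; `generate_step` and the `model.model`/`model.lm_head` layout are illustrative of a Hugging Face-style causal LM, not the actual TGI code:

```python
import torch

# Use `no_grad` rather than `inference_mode`: under `inference_mode`,
# linear layers can dispatch to `aten.matmul`, which has no DTensor
# sharding support, while `no_grad` keeps the shardable `aten.mm` path.
# (Hypothetical helper; the real fix lives inside the TGI backend.)
@torch.no_grad()
def generate_step(model, input_ids):
    hidden_states = model.model(input_ids)[0]
    # Call the module, not `.forward()`, so the registered forward
    # pre/post hooks run and cast/decast the DTensor activations.
    logits = model.lm_head(hidden_states)
    # `model.lm_head.forward(hidden_states)` would skip those hooks
    # and crash under tensor parallelism.
    return logits
```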
* Put back the attention impl.
* Revert the flashinfer (this will fail).
* Building AOT.
* Using 2.5 kernels.
* Remove the archlist; it's defined in the Docker build anyway.
* doc: Add metrics documentation and add a 'Reference' section
* doc: Add API reference
* doc: Refactor API reference
* fix: Message API link
* Fix a bad rebase.
* Moving the docs.
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>