Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-04-21 14:52:20 +00:00
* Upgrade the version number.
* Remove modifications in Lock.
* Tmp branch to test the transformers backend with 2.5.1 and TP>1.
* Fix the transformers backend. `inference_mode` forces the use of `aten.matmul` instead of `aten.mm`; the former lacks sharding support, which crashes transformers TP support. `lm_head.forward` also crashes because it skips the hook that casts/decasts the DTensor. Torch 2.5.1 is required for sharding support.
* Put back the attention impl.
* Revert the flashinfer change (this will fail).
* Building AOT.
* Using 2.5 kernels.
* Remove the archlist; it's defined in the docker anyway.
consuming_tgi.md
gated_model_access.md
monitoring.md
non_core_models.md
preparing_model.md
safety.md
train_medusa.md
using_cli.md
using_guidance.md
visual_language_models.md