Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-19 22:02:06 +00:00)
* Upgrade the version number.
* Remove modifications in Lock.
* Tmp branch to test the transformers backend with 2.5.1 and TP>1.
* Fix the transformers backend. `inference_mode` forces the use of `aten.matmul` instead of `aten.mm`; the former has no sharding support, which crashes the transformers TP support. `lm_head.forward` also crashes because it skips the hook that casts/uncasts the DTensor. Torch 2.5.1 is required for sharding support.
* Put back the attention impl.
* Revert the flashinfer change (this will fail).
* Build AOT.
* Use the 2.5 kernels.
* Remove the archlist; it's defined in the docker anyway.
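The `inference_mode` issue above comes down to the fact that `torch.inference_mode()` applies stricter dispatch-time behavior than `torch.no_grad()`, which can change which aten ops are hit and thus break DTensor/TP sharding. A minimal sketch of the workaround pattern, choosing the autograd context explicitly; the `generate` helper and its arguments are illustrative, not from the repository:

```python
import torch

def generate(model, inputs, use_inference_mode: bool = False):
    # torch.inference_mode() changes dispatch behavior (e.g. which matmul
    # decomposition is used), which can break DTensor/TP sharding.
    # torch.no_grad() disables autograd without that stricter behavior,
    # so it is the safer default here (an assumption for this sketch).
    ctx = torch.inference_mode() if use_inference_mode else torch.no_grad()
    with ctx:
        return model(inputs)
```

Either context produces tensors with `requires_grad=False`; the difference only shows up in operator dispatch and in whether the outputs can later participate in autograd.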
- chunking.md
- external.md
- flash_attention.md
- guidance.md
- lora.md
- paged_attention.md
- quantization.md
- safetensors.md
- speculation.md
- streaming.md
- tensor_parallelism.md