text-generation-inference/docs/source/conceptual
Nicolas Patry 29a0893b67
Tmp tp transformers (#2942) · 2025-01-23 18:07:30 +01:00
* Upgrade the version number.

* Remove modifications in Lock.

* Tmp branch to test transformers backend with 2.5.1 and TP>1

* Fixing the transformers backend.

`inference_mode` forces the use of `aten.matmul` instead of `aten.mm`;
the former has no sharding support, which crashes the transformers TP
support.
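A minimal sketch of one way to observe that dispatch difference, using
`TorchDispatchMode` to log the aten ops a linear layer hits under each
context (what actually gets logged depends on the torch build; the
comments just restate the commit's claim, this is not the fix itself):

```python
import torch
import torch.nn.functional as F
from torch.utils._python_dispatch import TorchDispatchMode

class OpLogger(TorchDispatchMode):
    """Print every aten op as it reaches the dispatcher."""
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)
        return func(*args, **(kwargs or {}))

x = torch.randn(4, 8)
w = torch.randn(16, 8)

with torch.no_grad(), OpLogger():
    F.linear(x, w)   # expected to lower to aten.mm, which DTensor can shard

with torch.inference_mode(), OpLogger():
    F.linear(x, w)   # per this commit, ends up on aten.matmul instead,
                     # which lacks a sharding rule and breaks TP
```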

`lm_head.forward` also crashes because calling `forward` directly skips
the hooks that cast tensors to and from `DTensor`.
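The relevant PyTorch behavior here is that calling a module's `forward`
method directly bypasses hooks registered with
`register_forward_pre_hook`/`register_forward_hook`, which the TP
integration relies on for the DTensor conversion. A minimal sketch (the
hook bodies are hypothetical stand-ins for the real casts):

```python
import torch
import torch.nn as nn

lm_head = nn.Linear(8, 32)

# Stand-ins for the hooks the TP integration installs to convert
# inputs to DTensor and outputs back to plain tensors.
lm_head.register_forward_pre_hook(
    lambda mod, args: print("pre-hook: cast inputs to DTensor"))
lm_head.register_forward_hook(
    lambda mod, args, out: print("post-hook: cast output back"))

x = torch.randn(2, 8)
lm_head(x)          # __call__ runs both hooks
lm_head.forward(x)  # skips the hooks, so the conversion never happens
```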

Torch 2.5.1 is required for sharding support.

* Put back the attention impl.

* Revert the flashinfer (this will fail).

* Building AOT.

* Using 2.5 kernels.

* Remove the archlist; it's defined in the Docker build anyway.
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| chunking.md | Small update to docs (#2816) | 2024-12-10 10:46:26 +01:00 |
| external.md | Add links to Adyen blogpost (#2500) | 2024-09-06 17:00:54 +02:00 |
| flash_attention.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| guidance.md | Add support for exl2 quantization | 2024-05-30 11:28:05 +02:00 |
| lora.md | CI (2592): Allow LoRA adapter revision in server launcher (#2602) | 2024-10-02 10:51:04 -04:00 |
| paged_attention.md | Paged Attention Conceptual Guide (#901) | 2023-09-08 14:18:42 +02:00 |
| quantization.md | Tmp tp transformers (#2942) | 2025-01-23 18:07:30 +01:00 |
| safetensors.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| speculation.md | docs(conceptual/speculation): available links Train Medusa (#2863) | 2025-01-15 16:05:54 +01:00 |
| streaming.md | Add links to Adyen blogpost (#2500) | 2024-09-06 17:00:54 +02:00 |
| tensor_parallelism.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |