Update tensor_parallelism.md

Merve Noyan 2023-08-22 14:50:16 +03:00 committed by GitHub

# Tensor Parallelism
Tensor parallelism (also called horizontal model parallelism) is a technique used to fit a large model on multiple GPUs (also called ranks). Intermediate outputs are sent and received from one rank to another in a synchronous or asynchronous manner. When multiplying the input with the weights for inference, multiplying the input with the full weight matrix directly is equivalent to splitting the weight matrix column-wise, multiplying the input with each column separately, and then concatenating the separate outputs, like below 👇
![Image courtesy of Anton Lozkhov](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tgi/TP.png)
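
A minimal sketch of this equivalence, assuming PyTorch; the tensor shapes are arbitrary illustration values:

```python
import torch

torch.manual_seed(0)
x = torch.randn(1, 8)    # input activations
w = torch.randn(8, 16)   # full weight matrix

# Split the weight matrix column-wise into two shards.
w0, w1 = w.chunk(2, dim=1)

# Multiply the input with each shard separately, then concatenate the partial outputs.
sharded_out = torch.cat([x @ w0, x @ w1], dim=1)

# The result matches the single, unsharded matrix multiplication.
print(torch.allclose(sharded_out, x @ w))  # True
```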
In TGI, tensor parallelism is implemented under the hood by sharding the weights and placing the shards on different ranks. The matrix multiplications then take place on different ranks, and their outputs are gathered into a single tensor.
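
The shard-then-gather flow can be illustrated with a minimal sketch, assuming PyTorch's `torch.distributed` with the `gloo` backend and a `torchrun` launch; this only illustrates the idea and is not TGI's actual implementation:

```python
# Launch with: torchrun --nproc_per_node=2 tp_sketch.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="gloo")   # CPU-friendly backend for the sketch
rank = dist.get_rank()
world_size = dist.get_world_size()

torch.manual_seed(0)                      # every rank builds the same full weight
x = torch.randn(1, 8)                     # input is replicated on all ranks
w = torch.randn(8, 16)                    # full weight matrix

# Each rank keeps only its own column shard of the weight matrix.
w_shard = w.chunk(world_size, dim=1)[rank]

# The local matmul produces this rank's slice of the output.
local_out = x @ w_shard

# Gather the slices from all ranks and concatenate them into a single output tensor.
gathered = [torch.empty_like(local_out) for _ in range(world_size)]
dist.all_gather(gathered, local_out)
full_out = torch.cat(gathered, dim=1)

if rank == 0:
    print(torch.allclose(full_out, x @ w))  # True: same as the unsharded matmul
dist.destroy_process_group()
```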
<Tip warning={true}>
Tensor parallelism only works for officially supported models; it will not work when falling back on `transformers`.
</Tip>
You can learn more about tensor parallelism in depth in the [`transformers` documentation](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many#tensor-parallelism).