From 4446d0f8388650edee26a493f3ebff7f04cd0fb2 Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Sun, 20 Aug 2023 01:42:56 +0300
Subject: [PATCH] Create tensor_parallelism.md

---
 docs/source/conceptual/tensor_parallelism.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)
 create mode 100644 docs/source/conceptual/tensor_parallelism.md

diff --git a/docs/source/conceptual/tensor_parallelism.md b/docs/source/conceptual/tensor_parallelism.md
new file mode 100644
index 00000000..c29eb6aa
--- /dev/null
+++ b/docs/source/conceptual/tensor_parallelism.md
@@ -0,0 +1,13 @@
+# Tensor Parallelism
+
+Tensor Parallelism (also called horizontal model parallelism) is a technique used to fit a large model on multiple GPUs. Model parallelism enables large model training and inference by placing different layers on different GPUs (also called ranks). Intermediate outputs are sent and received from one rank to another, either synchronously or asynchronously. For inference, multiplying the input with the full weight matrix directly is equivalent to splitting the weight matrix column-wise, multiplying each column with the input separately, and then concatenating the separate outputs, as shown below 👇
+
+![Image courtesy of Anton Lozkhov](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tgi/TP.png)
+
+In TGI, tensor parallelism is implemented under the hood by sharding the weights and placing them on different ranks. The matrix multiplications then take place on their respective ranks, and the results are gathered into a single tensor.
+
+<Tip warning={true}>
+
+Tensor Parallelism only works for models with custom kernels.
+
+</Tip>
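
As a rough illustration of the column-wise split described in the patch, the sketch below checks that sharding a weight matrix column-wise, multiplying each shard with the same input, and concatenating the partial outputs reproduces the full matrix multiplication. It runs on a single device with PyTorch; the shapes and the `num_ranks` value are illustrative assumptions, not TGI's actual sharding code.

```python
# Minimal sketch: column-wise weight sharding is equivalent to the full matmul.
# Shapes and rank count are illustrative, not taken from TGI's implementation.
import torch

torch.manual_seed(0)

hidden_size, out_features, num_ranks = 8, 12, 4

x = torch.randn(1, hidden_size)              # input activation
w = torch.randn(hidden_size, out_features)   # full weight matrix

# Reference: multiply the input with the full weight matrix directly.
full_output = x @ w

# Tensor-parallel view: split the weight matrix column-wise (one shard per rank),
# multiply each shard with the same input, then concatenate the partial outputs.
shards = torch.chunk(w, num_ranks, dim=1)
partial_outputs = [x @ shard for shard in shards]  # computed independently per rank
gathered_output = torch.cat(partial_outputs, dim=1)

assert torch.allclose(full_output, gathered_output, atol=1e-6)
print("Column-wise sharded matmul matches the full matmul.")
```

In a real multi-GPU setup, each shard would live on its own rank and the concatenation would be an all-gather collective rather than a local `torch.cat`.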