From 2035f3b7bcbba1e4a81ae1d163464241b3557c8f Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Mon, 21 Aug 2023 11:31:30 +0300
Subject: [PATCH] Update docs/source/conceptual/flash_attention.md

Co-authored-by: Nicolas Patry
---
 docs/source/conceptual/flash_attention.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/conceptual/flash_attention.md b/docs/source/conceptual/flash_attention.md
index 47511cff..042b3e12 100644
--- a/docs/source/conceptual/flash_attention.md
+++ b/docs/source/conceptual/flash_attention.md
@@ -1,6 +1,6 @@
 # Flash Attention
 
-Scaling transformer architecture is heavily bottlenecked by the self-attention mechanism, which has quadratic time and memory complexity. Recent developments in accelerator hardware are mainly focused on enhancing compute capacities and not memory and transferring data between hardware. This results in attention operation having a bottleneck in memory, also called _memory-bound_. Flash Attention is an attention algorithm used to overcome this problem and scale transformer-based models more efficiently, enabling faster training and inference.
+Scaling transformer architecture is heavily bottlenecked by the self-attention mechanism, which has quadratic time and memory complexity. Recent developments in accelerator hardware are mainly focused on enhancing compute capacities and not memory and transferring data between hardware. This results in attention operation having a bottleneck in memory, also called _memory-bound_. Flash Attention is an attention algorithm used to reduce this problem and scale transformer-based models more efficiently, enabling faster training and inference.
 
 In standard attention implementation, the cost of loading and writing keys, queries, and values from High Bandwidth Memory (HBM) is high. It loads key, query, value from HBM to GPU, performs a single step of the attention mechanism and writes it back to HBM, and repeats this for every singular step of the attention.
 Instead, Flash Attention loads keys, queries, and values once, fuses the operations of the attention mechanism and writes them back. It is implemented for models with custom kernels, you can check out the full list of models that support Flash Attention [here](https://github.com/huggingface/text-generation-inference/tree/main/server/text_generation_server/models).
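The "load once, fuse, write back" strategy in the patched paragraph can be sketched with a tiled attention loop using an online softmax: K/V are processed block by block and the softmax normalizer is updated incrementally, so the full quadratic score matrix is never materialized. This is a minimal NumPy illustration of the idea only, not the custom CUDA kernels used by text-generation-inference; all function and parameter names below are hypothetical:

```python
import numpy as np

def tiled_attention_sketch(q, k, v, block_size=64):
    """Attention computed over K/V blocks with an online softmax,
    mimicking how Flash Attention avoids storing the full
    (seq_len x seq_len) score matrix."""
    seq_len, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(q)
    # Per-query-row running max and normalizer (online softmax state).
    row_max = np.full(seq_len, -np.inf)
    row_sum = np.zeros(seq_len)
    for start in range(0, seq_len, block_size):
        kb = k[start:start + block_size]      # one K block
        vb = v[start:start + block_size]      # one V block
        scores = (q @ kb.T) * scale           # (seq_len, block)
        block_max = scores.max(axis=1)
        new_max = np.maximum(row_max, block_max)
        # Rescale previously accumulated output/normalizer to the new max.
        correction = np.exp(row_max - new_max)
        p = np.exp(scores - new_max[:, None])
        out = out * correction[:, None] + p @ vb
        row_sum = row_sum * correction + p.sum(axis=1)
        row_max = new_max
    return out / row_sum[:, None]

def naive_attention(q, k, v):
    """Standard attention: materializes the full score matrix."""
    scores = (q @ k.T) / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v
```

On a GPU the real benefit comes from keeping each block in fast on-chip SRAM and fusing the matmul, softmax update, and output accumulation into one kernel, so HBM is read and written once per block rather than once per attention step; the NumPy version only demonstrates that the blockwise math is exact.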