Chapters:
1 - Intro & Overview
2 - Softmax Attention & Transformers
3 - Quadratic Complexity of Softmax Attention
4 - Generalized Attention Mechanism
5 - Kernels
6 - Linear Attention
7 - Experiments
8 - Intuition on Linear Attention
9 - Connecting Autoregressive Transformers and RNNs
10 - Caveats with the RNN connection
11 - More Results & Conclusion
Description:
Explore a comprehensive video explanation of the paper "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention." Delve into the reformulation of the attention mechanism using kernel functions, resulting in a linear formulation that reduces computational and memory requirements. Discover the surprising connection between autoregressive transformers and RNNs. Learn about softmax attention, quadratic complexity, generalized attention mechanisms, kernels, linear attention, and experimental results. Gain insights into the intuition behind linear attention and understand the caveats of the RNN connection. This 48-minute video by Yannic Kilcher breaks down complex concepts, making them accessible to those interested in AI, attention mechanisms, transformers, and deep learning.
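
For readers who want to see the core idea before watching, below is a minimal NumPy sketch of the paper's causal linear attention, computed two ways: a parallel form using running sums, and the equivalent RNN-style recurrence that gives the paper its title. The elu(x) + 1 feature map is the one proposed in the paper; the function names, toy tensor shapes, and the eps denominator stabilizer are illustrative assumptions for this sketch, not the authors' optimized CUDA implementation.

```python
import numpy as np

def elu_plus_one(x):
    # phi(x) = elu(x) + 1, the (always-positive) feature map used in the paper
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(q, k, v, eps=1e-6):
    """Parallel form: O(N * d * d_v) time, no N x N attention matrix.

    (The cumsum keeps all prefix states in memory for clarity; a real
    implementation streams them.)
    """
    q, k = elu_plus_one(q), elu_plus_one(k)
    # Running sums over positions j <= i
    kv = np.cumsum(k[:, :, None] * v[:, None, :], axis=0)  # (N, d, d_v)
    z = np.cumsum(k, axis=0)                               # (N, d)
    num = np.einsum('nd,nde->ne', q, kv)                   # (N, d_v)
    den = np.einsum('nd,nd->n', q, z)[:, None] + eps       # (N, 1)
    return num / den

def rnn_linear_attention(q, k, v, eps=1e-6):
    """Recurrent form: the same computation as an RNN with state (S, z)."""
    q, k = elu_plus_one(q), elu_plus_one(k)
    S = np.zeros((k.shape[1], v.shape[1]))  # running sum of phi(k_j) v_j^T
    z = np.zeros(k.shape[1])                # running sum of phi(k_j)
    out = np.zeros_like(v)
    for i in range(len(q)):
        S += np.outer(k[i], v[i])
        z += k[i]
        out[i] = (q[i] @ S) / (q[i] @ z + eps)
    return out

# The two forms agree, illustrating the transformer-RNN equivalence
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 16, 8))  # toy sequence: N=16, d=d_v=8
assert np.allclose(causal_linear_attention(q, k, v),
                   rnn_linear_attention(q, k, v))
```

Because the recurrent form carries only a fixed-size state (S, z) between steps, generating each new token costs constant time and memory regardless of sequence length, which is the source of the transformer-RNN connection discussed in the video.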

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention

Yannic Kilcher