Chapters:
1. Intro
2. Context
3. The intuition
4. Encoder
5. Encoder block
6. Self-attention
7. Matrices
8. Input matrix
9. Query, key, value matrices
10. Self-attention formula
11. Self-attention: Step 1
12. Self-attention: Step 2
13. Self-attention: Step 3
14. Self-attention: Step 4
15. Self-attention: Visual recap
16. Multi-head attention
17. The problem of sequence order
18. Positional encoding
19. How to compute positional encoding
20. Feedforward layer
21. Add & norm layer
22. Deeper meaning of encoder components
23. Encoder step-by-step
24. Key takeaways
25. What next?
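The self-attention computation covered in chapters 10-15 (building query, key, and value matrices, then applying the attention formula step by step) can be sketched roughly as follows. This is a minimal illustration, not the lecture's own code; the token count, embedding size, and weight matrices are placeholder assumptions.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over an input matrix X.

    Steps mirror the chapter breakdown:
      1. project X into queries, keys, values
      2. compute scaled similarity scores Q K^T / sqrt(d_k)
      3. normalize scores row-wise with softmax
      4. take the weighted sum of values
    """
    Q = X @ W_q                     # query matrix
    K = X @ W_k                     # key matrix
    V = X @ W_v                     # value matrix
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)             # steps 1-2
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # step 3: softmax
    return weights @ V                            # step 4

# Placeholder sizes: 4 tokens, embedding dimension 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

Each output row is a context-aware mixture of the value vectors, which is the core idea the lecture formalizes.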
Description:
Dive into a comprehensive video lecture on transformer architectures, focusing on their application in generative music AI. Explore the intuition, theory, and mathematical formalization behind transformers, which have become dominant in deep learning across various fields. Gain insights into the encoder structure, self-attention mechanisms, multi-head attention, positional encoding, and feedforward layers. Follow along with step-by-step explanations of each component, including visual recaps and key takeaways. Enhance your understanding of this powerful deep learning architecture and its potential in audio and music processing.
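The positional-encoding chapters (18-19) describe how transformers inject sequence-order information. A common concrete form, sketched here under the assumption that the lecture follows the standard sinusoidal scheme, alternates sines and cosines across the embedding dimensions:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding:
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    """
    pos = np.arange(seq_len)[:, None]           # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]        # (1, d_model/2)
    angles = pos / 10000 ** (2 * i / d_model)   # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                # even dims: sine
    pe[:, 1::2] = np.cos(angles)                # odd dims: cosine
    return pe

pe = positional_encoding(6, 8)  # 6 positions, model dimension 8
print(pe.shape)  # (6, 8)
```

These encodings are added to the input embeddings so that otherwise order-blind self-attention can distinguish token positions.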

Transformers Explained - Part 1: Generative Music AI

Valerio Velardo - The Sound of AI