1. Intro
2. Decoder intuition
3. Decoder input
4. Decoder block
5. Training / inference discrepancy
6. Masked multi-head attention
7. Add & norm
8. Multi-head attention
9. Feedforward
10. Decoder block
11. Linear & softmax
12. Decoder step-by-step
13. Training a transformer
14. Music generation with transformers
15. Valerio's music generation transformer routine
16. Music data is key
17. Pros and cons
18. Most promising research
19. Key takeaways
20. What's up next?
Description:
Dive deep into the world of transformers and their application in generative music AI in this comprehensive video lecture. Explore the intuition, theory, and mathematics behind transformers, focusing on the decoder component and its various sublayers, including masked multi-head attention. Learn how to leverage transformers for music generation, with practical tips and tricks from industry experience. Discover the importance of music representation and data in the generation process, and gain insights into future research directions in neuro-symbolic integration for more robust music generation. Access accompanying lecture slides and join a community discussion to further enhance your understanding of this cutting-edge technology in AI-driven music creation.
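The lecture's central decoder sublayer is masked multi-head attention. As a rough illustration of the masking idea it covers, here is a minimal single-head sketch in NumPy; the function name, shapes, and toy dimensions are illustrative assumptions, not code from the lecture itself.

```python
# Minimal sketch of masked (causal) self-attention, the decoder sublayer
# discussed in the lecture. Single head for brevity; all names and shapes
# here are illustrative assumptions, not the lecturer's implementation.
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_head = q.shape[-1]
    scores = (q @ k.T) / np.sqrt(d_head)            # (seq_len, seq_len)

    # Causal mask: position i may only attend to positions <= i, so
    # future tokens cannot leak into the prediction during training.
    seq_len = scores.shape[0]
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)

    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # (seq_len, d_head)

# Toy usage: 5 tokens (e.g. note events), model width 8, head width 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out = causal_self_attention(x,
                            rng.normal(size=(8, 4)),
                            rng.normal(size=(8, 4)),
                            rng.normal(size=(8, 4)))
print(out.shape)  # (5, 4)
```

In a full decoder block this masked attention would be followed by the add & norm, multi-head attention, and feedforward sublayers listed in the chapters above, with multiple heads run in parallel and concatenated.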

Transformers for Generative Music AI - Part 2: Decoder and Music Generation

Valerio Velardo - The Sound of AI