1 – Motivation for reasoning & planning
2 – Inference through energy minimization
3 – Disclaimer
4 – Planning through energy minimization
5 – Q&A: Optimal control diagram
6 – Differentiable associative memory and attention
7 – Transformers
8 – Q&A: Other differentiable attention architectures
9 – Transformer architecture
10 – Transformer applications: 1. Multilingual transformer architecture XLM-R
11 – 2. Supervised symbol manipulation
12 – 3. NL understanding & generation
13 – 4. DETR
14 – Planning through optimal control
15 – Conclusion
Description:
Explore a comprehensive lecture on differentiable associative memories, attention mechanisms, and transformers delivered by Yann LeCun. Delve into the motivation behind reasoning and planning, learn about inference through energy minimization, and understand planning via energy minimization. Discover the intricacies of differentiable associative memory and attention, followed by an in-depth look at the transformer architecture and its applications. Examine specific use cases, including the multilingual transformer XLM-R, supervised symbol manipulation, natural language understanding and generation, and DETR (DEtection TRansformer). Conclude with insights on planning through optimal control.

Differentiable Associative Memories, Attention, and Transformers

Alfredo Canziani