1 - Introduction
2 - Sequence modeling
3 - Neurons with recurrence
4 - Recurrent neural networks
5 - RNN intuition
6 - Unfolding RNNs
7 - RNNs from scratch
8 - Design criteria for sequential modeling
9 - Word prediction example
10 - Backpropagation through time
11 - Gradient issues
12 - Long short-term memory (LSTM)
13 - RNN applications
14 - Attention fundamentals
15 - Intuition of attention
16 - Attention and search relationship
17 - Learning attention with neural networks
18 - Scaling attention and applications
19 - Summary
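The attention chapters in the outline above center on one standard operation, scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. The sketch below is an illustrative NumPy implementation of that general formula, not code from the lecture; the array shapes and variable names are assumptions chosen for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity scores between each query and each key, scaled by sqrt(d_k)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax turns scores into weights summing to 1 per query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted combination of the value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries of dimension 4 (illustrative shapes)
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # one output vector per query
```

This is also where the "attention and search relationship" intuition lives: queries are search terms, keys index the items, and the softmax weights rank how relevant each value is to each query.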
Description:
Explore the fundamentals of recurrent neural networks and transformers in this comprehensive lecture from MIT's Introduction to Deep Learning course. Delve into sequence modeling, neurons with recurrence, and the intuition behind RNNs. Learn how to unfold RNNs, build them from scratch, and understand their design criteria for sequential modeling. Examine word prediction examples and backpropagation through time, while addressing gradient issues. Discover long short-term memory (LSTM) and various RNN applications. Investigate attention mechanisms, their intuition, and relationship to search. Gain insights into learning attention with neural networks, scaling attention, and its applications. Conclude with a summary of key concepts in this 58-minute lecture delivered by Ava Soleimany, offering a solid foundation in advanced deep learning techniques.
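The "build them from scratch" and "unfolding RNNs" topics mentioned in the description amount to applying one recurrence at every timestep: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b). A minimal sketch of that idea follows; the class name, dimensions, and initialization are illustrative assumptions, not the lecture's own code.

```python
import numpy as np

class SimpleRNNCell:
    """Vanilla RNN cell: a sketch of the core recurrence, not the lecture's code."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Input-to-hidden and hidden-to-hidden weights, plus a bias vector
        self.W_xh = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.W_hh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.b_h = np.zeros(hidden_dim)

    def step(self, x_t, h_prev):
        # Core recurrence: the new state mixes the current input with the old state
        return np.tanh(self.W_xh @ x_t + self.W_hh @ h_prev + self.b_h)

# "Unfolding" the RNN means reusing the same cell (same weights) at each timestep
cell = SimpleRNNCell(input_dim=3, hidden_dim=4)
h = np.zeros(4)
sequence = [np.ones(3) * t for t in range(5)]   # toy 5-step input sequence
for x_t in sequence:
    h = cell.step(x_t, h)
print(h.shape)  # hidden state keeps a fixed size regardless of sequence length
```

Backpropagation through time, also listed above, is gradient descent applied across this unfolded chain, which is where the lecture's vanishing/exploding gradient issues arise.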

Recurrent Neural Networks and Transformers

Alexander Amini