1 - Introduction
2 - Sequence modeling
3 - Neurons with recurrence
4 - Recurrent neural networks
5 - RNN intuition
6 - Unfolding RNNs
7 - RNNs from scratch
8 - Design criteria for sequential modeling
9 - Word prediction example
10 - Backpropagation through time
11 - Gradient issues
12 - Long short-term memory (LSTM)
13 - RNN applications
14 - Attention
15 - Summary
Description:
Dive into the world of Recurrent Neural Networks in this comprehensive lecture from MIT's Introduction to Deep Learning course. Explore sequence modeling, neurons with recurrence, and the intuition behind RNNs. Learn how to unfold RNNs, build them from scratch, and understand the design criteria for sequential modeling. Discover practical applications through a word prediction example, and delve into advanced concepts like backpropagation through time and gradient issues. Gain insights into Long Short-Term Memory (LSTM) networks, various RNN applications, and the powerful attention mechanism. By the end of this hour-long session, you'll have a solid foundation in RNNs and their role in deep learning.
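As a companion to the "Unfolding RNNs" and "RNNs from scratch" chapters, below is a minimal sketch of a vanilla RNN forward pass in NumPy. It is not the lecture's own code; the weight names (W_xh, W_hh, W_hy), dimensions, and initialization are illustrative assumptions, showing only the core recurrence h_t = tanh(W_hh h_{t-1} + W_xh x_t + b_h).

```python
import numpy as np

def init_rnn_params(input_dim, hidden_dim, output_dim, seed=0):
    # Hypothetical parameter layout: small random weights, zero biases.
    rng = np.random.default_rng(seed)
    return {
        "W_xh": rng.normal(0, 0.1, (hidden_dim, input_dim)),   # input -> hidden
        "W_hh": rng.normal(0, 0.1, (hidden_dim, hidden_dim)),  # hidden -> hidden (recurrence)
        "W_hy": rng.normal(0, 0.1, (output_dim, hidden_dim)),  # hidden -> output
        "b_h": np.zeros(hidden_dim),
        "b_y": np.zeros(output_dim),
    }

def rnn_forward(params, inputs):
    """Run the RNN over a sequence of input vectors; return per-step outputs and hidden states."""
    h = np.zeros_like(params["b_h"])  # initial hidden state h_0 = 0
    outputs, states = [], [h]
    for x_t in inputs:
        # Core recurrence: h_t = tanh(W_hh h_{t-1} + W_xh x_t + b_h)
        h = np.tanh(params["W_hh"] @ h + params["W_xh"] @ x_t + params["b_h"])
        # Per-step output: y_t = W_hy h_t + b_y
        y = params["W_hy"] @ h + params["b_y"]
        outputs.append(y)
        states.append(h)
    return outputs, states

# Example: a length-4 sequence of 3-dimensional inputs.
sequence = [np.random.randn(3) for _ in range(4)]
params = init_rnn_params(input_dim=3, hidden_dim=8, output_dim=2)
ys, hs = rnn_forward(params, sequence)
print(len(ys), ys[-1].shape)  # 4 outputs, each of shape (2,)
```

Backpropagation through time, as covered later in the lecture, would differentiate the loss through this unrolled loop, which is where the vanishing/exploding gradient issues and the motivation for LSTMs come in.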

MIT 6.S191 - Recurrent Neural Networks

Alexander Amini