1. Intro
2. FNN, CNN and RNN architectures
3. CNN generative models
4. Minimax analysis
5. Estimator and assumptions
6. Main results (informal)
7. Related work
8. Upper bounds (formal)
9. Proof sketch
10. Lower bounds (formal)
11. Experiments: CNN (average pooling) vs. FNN
12. Experiments: CNN (weighted pooling) vs. FNN
13. Open questions
Description:
Explore the sample complexity of estimating convolutional and recurrent neural networks in this 35-minute lecture by Aarti Singh of Carnegie Mellon University, part of the Simons Institute's Frontiers of Deep Learning series. Gain insight into FNN, CNN, and RNN architectures, CNN generative models, and minimax analysis. Examine the estimator and its assumptions and an informal statement of the main results, followed by a discussion of related work and the formal upper and lower bounds. Follow the proof sketch, then review experiments comparing CNNs (with average and weighted pooling) against FNNs. Conclude with open questions in the field.

Sample-Complexity of Estimating Convolutional and Recurrent Neural Networks

Simons Institute