1. Introduction
2. Why is deep learning so popular
3. Why does deep learning not work
4. Supervised learning
5. Stochastic gradient descent
6. Local optimization
7. Prediction error
8. What we converge to
9. Implicit Regularization
10. Stochastic Mirror Descent
11. Bregman Divergence
12. Stochastic Mirror Descent Algorithm
13. Conventional Neural Networks
14. SMD
15. Summary
16. Nonlinear models
17. Blessing of dimensionality
18. Distribution of weights
19. Explicit regularization
20. Blessings of dimensionality
Description:
Explore the theoretical underpinnings of deep learning in this 37-minute lecture by Babak Hassibi from the California Institute of Technology. Delve into the success of deep neural networks, focusing on the crucial role of stochastic descent methods in finding solutions that generalize well. Connect learning algorithms such as stochastic gradient descent (SGD) and stochastic mirror descent (SMD) to H-infinity control, explaining their convergence and implicit regularization behavior in over-parameterized settings. Gain insight into the "blessing of dimensionality" phenomenon and learn about a new algorithm, regularized SMD (RSMD), which offers superior generalization performance on noisy datasets. Examine topics such as supervised learning, local optimization, prediction error, Bregman divergence, and the distribution of weights in neural networks.
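For quick reference, the standard textbook forms of the Bregman divergence and of the stochastic mirror descent update are sketched below in generic notation (the symbols φ, η, and L are not taken from the lecture slides): for a strictly convex potential φ, step size η, and per-example loss L_{i_t} at step t,

$$D_\phi(w, w') = \phi(w) - \phi(w') - \nabla\phi(w')^\top (w - w'),$$
$$\nabla\phi(w_{t+1}) = \nabla\phi(w_t) - \eta\, \nabla L_{i_t}(w_t).$$

Choosing $\phi(w) = \tfrac{1}{2}\|w\|_2^2$ makes the Bregman divergence the squared Euclidean distance and reduces the update to ordinary SGD, which is the sense in which SGD can be viewed as a special case of SMD.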

Implicit and Explicit Regularization in Deep Neural Networks

Institute for Pure & Applied Mathematics (IPAM)