1. Date & Time: Tuesday, 12 February
2. Date & Time: Wednesday, 13 February
3. Start
4. Toward theoretical understanding of deep learning
5. Machine learning (ML): A new kind of science
6. Recap
7. Training via gradient descent, a "natural algorithm"
8. Subcase: deep learning
9. Brief history: networks of "artificial neurons"
10. Some questions
11. Part 1: Why overparameterization and/or overprovisioning?
12. Overprovisioning may help optimization, part 1: A folklore experiment
13. Overprovisioning can help, part 2: Allowing more
14. Acceleration effect of increasing depth
15. But textbooks warn us: Larger models can "overfit"
16. Popular belief/conjecture
17. Noise stability: Understanding one layer (no nonlinearity)
18. Proof sketch: Noise stability implies a deep net can be made low-dimensional
19. The quantitative bound
20. Correlation with generalization (qualitative check)
21. Concluding thoughts on generalization
22. Part 2: Optimization in deep learning
23. Basic concepts
24. Curse of dimensionality
25. Gradient descent in an unknown landscape
26. Gradient descent in an unknown landscape, contd.
27. Evading saddle points
28. Active area: Landscape analysis
29. New trend: Trajectory analysis
30. Trajectory analysis, contd.
31. Unsupervised learning motivation: The "manifold assumption"
32. Unsupervised learning motivation: The "manifold assumption", contd.
33. Deep generative models
34. Generative Adversarial Nets (GANs) [Goodfellow et al. 2014]
35. What spoils a GAN trainer's day: Mode collapse
36. Empirically detecting mode collapse: The Birthday Paradox Test
37. Estimated support size from well-known GANs
38. To wrap up: Suggestions for theorists on what to work on
39. Concluding thoughts
40. Advertisements
41. Q&A
Description:
Explore the theoretical foundations of deep learning in this comprehensive lecture by Sanjeev Arora from Princeton University and the Institute for Advanced Study. Delve into the mathematics behind machine learning, focusing on supervised and unsupervised learning techniques. Examine the challenges of overparameterization, optimization, and generalization in deep neural networks. Investigate landscape analysis, trajectory analysis, and the manifold assumption in unsupervised learning. Learn about deep generative models, including Generative Adversarial Networks (GANs), and their associated challenges like mode collapse. Gain insights into cutting-edge research directions and potential areas for theoretical exploration in the field of deep learning.
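One chapter covers the Birthday Paradox Test for empirically detecting mode collapse: if a batch of s samples from a generator contains a (near-)duplicate about half the time, the generator's effective support size is roughly s². The simulation below is an illustrative sketch of that heuristic on a uniform discrete distribution (the function names and parameters are mine, not from the lecture; the real test compares GAN-generated images for near-duplicates rather than exact integer matches):

```python
import random

def has_collision(samples):
    """True if any value in the batch repeats."""
    return len(set(samples)) < len(samples)

def collision_rate(support_size, batch_size, trials=2000, seed=0):
    """Fraction of batches, drawn uniformly from `support_size` distinct
    items, that contain at least one duplicate."""
    rng = random.Random(seed)
    hits = sum(
        has_collision([rng.randrange(support_size) for _ in range(batch_size)])
        for _ in range(trials)
    )
    return hits / trials
```

For a distribution over 10,000 distinct items, batches of around 120 samples collide roughly half the time, consistent with the support size being on the order of the batch size squared.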

Toward Theoretical Understanding of Deep Learning - Lecture 2

International Centre for Theoretical Sciences