Outline:
1. Intro
2. Scientific context
3. Parametric supervised machine learning
4. Convex optimization problems
5. Exponentially convergent SGD for smooth finite sums
6. Exponentially convergent SGD for finite sums
7. Convex optimization for machine learning
8. Theoretical analysis of deep learning
9. Optimization for multi-layer neural networks
10. Gradient descent for a single hidden layer
11. Optimization on measures
12. Many particle limit and global convergence (Chizat and Bach, 2018a)
13. Simple simulations with neural networks
14. From qualitative to quantitative results?
15. Lazy training (Chizat and Bach, 2018)
16. From lazy training to neural tangent kernel
17. Are state-of-the-art neural networks in the lazy regime?
18. Is the neural tangent kernel useful in practice?
19. Can learning theory resist deep learning?
Description:
Explore the challenges of applying learning theory to deep learning in this 43-minute conference talk by Francis Bach from INRIA. Delve into recent results on global convergence of gradient descent for specific non-convex optimization problems, highlighting the difficulties and pitfalls encountered when analyzing deep learning algorithms. Examine the constant exchanges between theory and practice in machine learning, and investigate why these exchanges become more complex in the realm of deep learning. Gain insights into the intersection of statistics and computer science in modern machine learning, covering topics such as parametric supervised learning, convex optimization, stochastic gradient descent, and the theoretical analysis of deep neural networks.
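
As a rough, hypothetical illustration of one topic covered in the talk (gradient descent on a single hidden layer, outline items 10 and 13), the short NumPy sketch below fits a toy regression problem with full-batch gradient descent and a 1/m output scaling reminiscent of the mean-field setting. This is not the speaker's code; the sizes, step size, and target function are arbitrary choices made for this example.

import numpy as np

# Hypothetical toy setup: all sizes and the target are arbitrary choices.
rng = np.random.default_rng(0)
n, d, m = 200, 5, 100                          # samples, input dimension, hidden units
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                            # toy regression target

# Single hidden layer with ReLU activations; 1/m output scaling.
W = rng.standard_normal((m, d)) / np.sqrt(d)   # input weights
a = rng.standard_normal(m) / m                 # output weights

lr, steps = 0.05, 5000
for _ in range(steps):
    pre = X @ W.T                              # (n, m) pre-activations
    h = np.maximum(pre, 0.0)                   # hidden-layer features
    resid = h @ a - y                          # prediction residuals

    # Gradients of the mean squared loss 0.5 * mean(resid**2)
    grad_a = h.T @ resid / n
    grad_W = ((resid[:, None] * (pre > 0)) * a).T @ X / n

    a -= lr * grad_a
    W -= lr * grad_W

pred = np.maximum(X @ W.T, 0.0) @ a
print("final training loss:", 0.5 * np.mean((pred - y) ** 2))

With enough hidden units, gradient descent on this non-convex objective tends to drive the training loss down; that behaviour, and when it can be guaranteed, is the kind of global-convergence question the talk analyses. The sketch makes no claim beyond this toy setting.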

Can Learning Theory Resist Deep Learning? - Francis Bach, INRIA

Alan Turing Institute