1. Intro
2. Empirical Risk Minimization
3. The ERM/SRM theory of learning
4. Uniform laws of large numbers
5. Capacity control
6. U-shaped generalization curve
7. Does interpolation overfit?
8. Interpolation does not overfit even for very noisy data
9. Why bounds fail
10. Interpolation is best practice for deep learning
11. Historical recognition
12. Where we are now: the key lesson
13. Generalization theory for interpolation?
14. Interpolated k-NN schemes
15. Interpolation and adversarial examples
16. "Double descent" risk curve
17. Random Fourier networks
18. What is the mechanism?
19. Is infinite width optimal?
20. Smoothness by averaging
21. Double descent in random feature settings
22. Framework for modern ML
23. The landscape of generalization
24. Optimization: classical
25. The power of interpolation
26. Learning from deep learning: fast and effective kernel machines
27. Points and lessons
Description:
Explore a thought-provoking lecture on the evolution of machine learning theory, from classical statistics to modern deep learning approaches. Delve into key concepts such as Empirical Risk Minimization, uniform laws of large numbers, and capacity control. Examine the intriguing U-shaped generalization curve and challenge the conventional wisdom that interpolating the training data must overfit. Discover why interpolation has become best practice in deep learning and investigate the "double descent" risk curve phenomenon. Analyze random Fourier networks, interpolated k-NN schemes, and the relationship between interpolation and adversarial examples. Gain insights into the landscape of generalization, optimization techniques, and the power of interpolation in modern machine learning frameworks. Learn valuable lessons from deep learning applications and explore the potential of fast and effective kernel machines.
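
As a companion to the "double descent" and random-feature chapters listed above, the following is a minimal sketch (not taken from the lecture) of how one might trace a double-descent-style risk curve using random Fourier features fit by minimum-norm least squares. The data, frequencies, noise level, and feature counts are illustrative assumptions; typically the test error peaks near the interpolation threshold (number of features roughly equal to the number of training points) and can decrease again beyond it.

# Minimal, illustrative sketch: double descent with random Fourier features.
# All problem settings below (target function, noise, feature counts) are
# assumptions made for illustration, not values from the lecture.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)

n_train, n_test = 40, 500
x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + 0.3 * rng.normal(size=n_train)   # noisy labels
x_test = rng.uniform(-1, 1, n_test)
y_test = target(x_test)

def rff(x, omega, b):
    # Random Fourier feature map: scaled cos(omega * x + b).
    return np.sqrt(2.0 / len(omega)) * np.cos(np.outer(x, omega) + b)

for n_features in [5, 10, 20, 40, 80, 160, 320, 640]:
    omega = rng.normal(scale=10.0, size=n_features)   # random frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)         # random phases
    Phi_train = rff(x_train, omega, b)
    Phi_test = rff(x_test, omega, b)
    # Least-squares fit; for n_features >= n_train this is the minimum-norm
    # interpolating solution.
    w, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    train_mse = np.mean((Phi_train @ w - y_train) ** 2)
    test_mse = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"{n_features:4d} features  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")

The choice of np.linalg.lstsq matters here: in the overparameterized regime it returns the minimum-norm interpolant, which is the implicit regularization that makes test error improve again past the interpolation threshold.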

From Classical Statistics to Modern ML - The Lessons of Deep Learning - Mikhail Belkin

Institute for Advanced Study