1. Intro
2. Main Reference
3. Self-Training
4. Self-Distillation [Deep Learning]
5. Self-Distillation More Profound
6. Learning Functions in Hilbert Space
7. Unconstrained Form
8. Intuition
9. Closed Form Solution
10. Connections
11. Challenges
12. Power Iteration Analogy
13. Capacity Control
14. Generalization Guarantees
15. Revisiting Illustrative Example
16. Advantage of Near Interpolation
17. Early Stopping
18. Deep Learning Experiments
19. Open Problems
Description:
Explore the concepts of self-training and self-distillation in machine learning through this 44-minute lecture by Hossein Mobahi from Google Research. Delve into the surprising phenomenon where retraining models on their own predictions can improve generalization performance. Examine the regularization effects induced by this process and their amplification through multiple rounds of retraining. Investigate the rigorous characterization of these effects in Hilbert space learning and its relation to infinite-width neural networks. Cover topics such as the unconstrained form, closed-form solutions, the power iteration analogy, capacity control, and generalization guarantees. Analyze deep learning experiments and discuss open problems in the field of self-training and self-distillation.
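
To make the retraining loop concrete, below is a minimal sketch of self-distillation for a kernel ridge regression learner, which is the Hilbert space setting the lecture analyzes in closed form. The RBF kernel, the regularization constant `reg`, the synthetic sine data, and the helper names `rbf_kernel` and `self_distill` are illustrative assumptions, not code from the talk: each round solves the ridge problem against the previous round's predictions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel from pairwise squared distances. Illustrative choice.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def self_distill(X, y, rounds=5, reg=1e-1, gamma=1.0):
    """Kernel ridge regression retrained on its own predictions each round (sketch)."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    targets = y.copy()
    history = []
    for _ in range(rounds):
        # Closed-form ridge solution for the current targets.
        alpha = np.linalg.solve(K + reg * np.eye(n), targets)
        preds = K @ alpha          # model's predictions on the training points
        history.append(preds)
        targets = preds            # next round distills from these predictions
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.linspace(0, 1, 40)[:, None]
    y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(40)  # noisy labels
    for t, preds in enumerate(self_distill(X, y, rounds=6), start=1):
        print(f"round {t}: train MSE vs noisy labels = {np.mean((preds - y) ** 2):.4f}")
```

In this sketch, each round multiplies the targets by K(K + reg·I)⁻¹, whose eigenvalues all lie below one, so directions with small kernel eigenvalues are damped fastest; the fit drifts toward smoother functions as rounds accumulate, loosely mirroring the amplified regularization and power iteration analogy discussed in the lecture.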

Improving Generalization by Self-Training & Self Distillation

MITCBMM