1. Intro
2. Neural Networks are Large
3. Background: Network Pruning
4. Training is Expensive
5. Research Question
6. Motivation and Questions
7. Training Pruned Networks
8. Iterative Magnitude Pruning
9. Results
10. The Lottery Ticket Hypothesis
11. Broader Questions
12. Larger-Scale Settings
13. Scalability Challenges
14. Linear Mode Connectivity
15. Instability
16. Rewinding IMP Works
17. Takeaways
18. Our Current Understanding
19. Implications and Follow-Up
Description:
Explore the "Lottery Ticket Hypothesis" in neural network pruning in this seminar by Michael Carbin of MIT. Delve into techniques for reducing parameter counts in trained networks by over 90% without compromising accuracy, and discover how iterative magnitude pruning uncovers subnetworks that can be trained effectively from early in training. Learn about the potential for more efficient machine learning, including inference, fine-tuning of pre-trained networks, and sparse training. Gain insight into Carbin's broader research on the semantics, design, and implementation of systems that operate under uncertainty in their environment, implementation, or execution. Follow the journey from background on network pruning to our current understanding and the implications for future research on sparse, trainable neural networks.
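
The pruning procedure at the center of the talk can be summarized in a few lines. The sketch below is a minimal, illustrative rendering of iterative magnitude pruning with rewinding, assuming PyTorch; the helpers build_model and train, the number of rounds, and the per-round pruning fraction are placeholders rather than details taken from the seminar.

```python
# Minimal sketch of iterative magnitude pruning (IMP) with rewinding.
# build_model() and train(model, masks) are assumed, hypothetical helpers.
import copy
import torch

def imp(build_model, train, rounds=5, prune_frac=0.2):
    model = build_model()
    # Save the weights at (or near) initialization to rewind to later.
    init_state = copy.deepcopy(model.state_dict())
    # One binary mask per weight matrix; 1 = kept, 0 = pruned.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model, masks)  # train with pruned weights held at zero
        # Remove the smallest-magnitude surviving weights in each layer.
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            mask = masks[name]
            alive = p.detach().abs()[mask.bool()]
            k = int(prune_frac * alive.numel())
            if k > 0:
                threshold = alive.kthvalue(k).values
                masks[name] = mask * (p.detach().abs() > threshold).float()
        # Rewind surviving weights to their early values and retrain.
        model.load_state_dict(init_state)
    return model, masks
```

The returned masks define the sparse subnetwork (the "winning ticket"); whether that subnetwork trains to full accuracy when rewound is the empirical question the talk examines.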

The Lottery Ticket Hypothesis - Michael Carbin

Massachusetts Institute of Technology