1. Intro
2. Presentation Outline
3. Universal Approximation Theorem
4. L2 Regularization - "Weight Decay"
5. Dropout
6. Data Augmentation
7. Randomization Tests
8. Results of Randomization Tests
9. Conclusions & Implications
10. Explicit Regularization Tests
11. Implicit Regularization Findings
12. Finite-Sample Expressivity of Neural Networks
13. Appeal to Linear Models
14. Investigating SGD
15. Final Conclusions
Description:
Explore the intricacies of deep learning and challenge conventional wisdom on generalization in this 40-minute lecture from the University of Central Florida. Delve into topics such as the Universal Approximation Theorem, L2 Regularization, Dropout, and Data Augmentation. Examine randomization tests and their results, leading to thought-provoking conclusions and implications. Investigate explicit and implicit regularization techniques, finite-sample expressivity of neural networks, and draw comparisons to linear models. Conclude by analyzing the role of Stochastic Gradient Descent (SGD) in deep learning, ultimately reshaping your understanding of generalization in neural networks.
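The centerpiece of the lecture is the randomization test: train an unmodified network on data whose labels have been replaced with uniformly random classes and observe that it can still drive training error toward zero. Below is a minimal sketch of that idea, assuming PyTorch and torchvision are available; the small MLP, CIFAR-10, and all hyperparameters are illustrative stand-ins, not the lecture's exact setup.

```python
# Randomization-test sketch: replace every true label with a random class
# and check that a standard network can still fit the training set.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

torch.manual_seed(0)

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
# The randomization step: overwrite all labels with uniform random classes,
# destroying any relationship between images and targets.
train_set.targets = torch.randint(0, 10, (len(train_set),)).tolist()

loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# A small MLP stands in for the larger architectures discussed in the lecture.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
    nn.Linear(512, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):  # given enough epochs, training accuracy keeps climbing
    correct = total = 0
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.size(0)
    print(f"epoch {epoch}: train acc on random labels = {correct / total:.3f}")
```

Since the random labels carry no signal, any training accuracy above the 10% chance level reflects pure memorization, which is why this result challenges capacity- and regularization-based explanations of generalization.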

Understanding Deep Learning Requires Rethinking Generalization

University of Central Florida