1. Intro
2. Big Neural Nets
3. Big Models Over-Fitting
4. Training with DropOut
5. DropOut/Connect Intuition
6. Theoretical Analysis of DropConnect
7. MNIST Results
8. Varying Size of Network
9. Varying Fraction Dropped
10. Comparison of Convergence Rates
11. Limitations of DropOut/Connect
12. Stochastic Pooling
13. Methods for Test Time
14. Varying Size of Training Set
15. Convergence / Over-Fitting
16. Street View House Numbers
17. Deconvolutional Networks
18. Recap: Sparse Coding (Patch-based)
19. Reversible Max Pooling
20. Single Layer Cost Function
21. Single Layer Inference
22. Effect of Sparsity
23. Effect of Pooling Variables
24. Talk Overview
25. Stacking the Layers
26. Two Layer Example
27. Link to Parts and Structure Models
28. Caltech 101 Experiments
29. Layer 2 Filters
30. Classification Results: Caltech 101
31. Deconvolutional + Convolutional
32. Summary
Description:
Explore a comprehensive guest lecture on regularization techniques for large neural networks delivered by Dr. Rob Fergus at the University of Central Florida. Delve into topics such as big neural nets, over-fitting, dropout and dropconnect methods, stochastic pooling, and deconvolutional networks. Learn about the theoretical analysis of these techniques, their limitations, and practical applications through experiments on datasets like MNIST, Street View House Numbers, and Caltech 101. Gain insights into convergence rates, the effects of network size and training set variations, and the link between deconvolutional networks and parts-and-structure models. Enhance your understanding of advanced deep learning concepts and their impact on computer vision tasks.
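As a rough illustration of the masking idea behind DropOut and DropConnect (a sketch for orientation, not code from the lecture), the snippet below applies DropOut to a layer's activations and DropConnect to its weights using NumPy. The layer sizes, drop probability `p`, and function names are illustrative assumptions; the 1/(1-p) rescaling follows the common "inverted dropout" convention rather than any specific test-time scheme from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p=0.5):
    """DropOut sketch: zero each activation with probability p,
    scaling survivors by 1/(1-p) so the expected value is preserved."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

def dropconnect_forward(x, W, b, p=0.5):
    """DropConnect sketch: zero individual weights (connections)
    with probability p instead of zeroing whole activations."""
    mask = rng.random(W.shape) >= p
    return x @ (W * mask) / (1.0 - p) + b

# Toy usage with made-up sizes: a batch of 4 inputs, 8 -> 16 units.
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
b = np.zeros(16)
h = np.maximum(0.0, dropconnect_forward(x, W, b, p=0.5))  # ReLU layer with DropConnect
y = dropout_forward(h, p=0.5)                             # DropOut on its activations
print(y.shape)  # (4, 16)
```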

Regularization of Big Neural Networks

University of Central Florida