1. Intro
2. Based on joint work with
3. Sparsity & frugality
4. Sparsity & interpretability
5. Deep sparsity?
6. Bilinear sparsity: blind deconvolution
7. ReLU network training - weight decay
8. Behind the scene
9. Greed is good?
10. Optimization with support constraints
11. Application: butterfly factorization
12. Wandering in equivalence classes
13. Other consequences of scale-invariance
14. Conservation laws
Description:
Explore the depths of sparsity in neural networks through this 37-minute conference talk by Remi Gribonval from INRIA, hosted by the Institut des Hautes Etudes Scientifiques (IHES). Delve into the natural promotion of sparse connections in neural networks for complexity control and potential interpretability guarantees. Compare classical sparse regularization for inverse problems with multilayer sparse approximation. Discover the role of rescaling-invariances in deep parameterizations, their advantages and challenges. Learn about life beyond gradient descent, including an algorithm that significantly speeds up learning of certain fast transforms via multilayer sparse factorization. Cover topics such as bilinear sparsity, blind deconvolution, ReLU network training with weight decay, optimization with support constraints, butterfly factorization, and the consequences of scale-invariance in neural networks.

Rapture of the Deep: Highs and Lows of Sparsity in Neural Networks

Institut des Hautes Etudes Scientifiques (IHES)