Rapture of the Deep: Highs and Lows of Sparsity in Neural Networks

Explore the depths of sparsity in neural networks in this 37-minute conference talk by Remi Gribonval (INRIA), hosted by the Institut des Hautes Etudes Scientifiques (IHES). Delve into how sparse connections are naturally promoted in neural networks for complexity control and potential interpretability guarantees. Compare classical sparse regularization for inverse problems with multilayer sparse approximation. Discover the role of rescaling-invariances in deep parameterizations, along with the advantages and challenges they bring. Learn about life beyond gradient descent, including an algorithm that significantly speeds up the learning of certain fast transforms via multilayer sparse factorization. Topics covered include bilinear sparsity, blind deconvolution, ReLU network training with weight decay, optimization with support constraints, butterfly factorization, and the consequences of scale-invariance in neural networks.
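
As a minimal illustration of the rescaling-invariance mentioned above (a sketch for intuition, not code from the talk): in a two-layer ReLU network, multiplying the first layer's weights by any a > 0 and dividing the second layer's weights by a leaves the network function unchanged, because ReLU is positively homogeneous. The network sizes and weights below are arbitrary placeholders.

```python
import numpy as np

# Hypothetical two-layer ReLU network f(x) = W2 @ relu(W1 @ x)
# with placeholder dimensions and random weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)

relu = lambda z: np.maximum(z, 0.0)

f = W2 @ relu(W1 @ x)

# Rescaling-invariance: relu(a * z) = a * relu(z) for a > 0,
# so (a * W1, W2 / a) computes exactly the same function as (W1, W2).
a = 3.7  # any positive scalar
f_rescaled = (W2 / a) @ relu((a * W1) @ x)

assert np.allclose(f, f_rescaled)  # same function, different parameters
```

This is one reason such invariances matter for training with weight decay: distinct parameterizations of the same function can have very different parameter norms, which interacts with regularization and optimization.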