Gaussian Pre-Activations in Neural Networks: Myth or Reality?

Explore the intricacies of Gaussian pre-activations in neural networks in this 45-minute conference talk by Pierre Wolinski at the Finnish Center for Artificial Intelligence. Delve into the construction of activation functions and initialization distributions that ensure Gaussian pre-activations throughout the network's depth, even in narrow neural networks. Examine a critical review of Edge of Chaos claims and discover a unified view of pre-activation propagation. Gain insights into information propagation in deep and narrow neural networks, comparing ReLU and tanh activation functions under Kaiming and Xavier initializations. Learn about the speaker's background in neural network pruning and Bayesian neural networks, as well as his current research on information propagation during initialization and training.
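
To make the comparison concrete, the sketch below pushes standard-normal inputs through a deep, narrow multilayer perceptron and checks how far the final pre-activations drift from Gaussian, using excess kurtosis as a rough diagnostic. This is an illustrative NumPy experiment in the spirit of the talk, not the speaker's code; the chosen depth, width, and the kurtosis measure are assumptions made for demonstration.

```python
import numpy as np

def propagate(depth=50, width=8, n_samples=20000, act="relu", init="kaiming", seed=0):
    """Push standard-normal inputs through a deep, narrow MLP and
    return the pre-activations of the final layer.

    Illustrative sketch: depth/width are arbitrary demo values.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, width))
    for _ in range(depth):
        if init == "kaiming":            # Kaiming/He: Var(W_ij) = 2 / fan_in
            std = np.sqrt(2.0 / width)
        else:                            # Xavier/Glorot (square layers): Var(W_ij) = 1 / fan_in
            std = np.sqrt(1.0 / width)
        W = rng.standard_normal((width, width)) * std
        z = x @ W                        # pre-activations of this layer
        x = np.maximum(z, 0.0) if act == "relu" else np.tanh(z)
    return z

def excess_kurtosis(z):
    """Roughly 0 for a Gaussian; larger values signal heavier tails."""
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0

# Compare the two classic pairings discussed in the talk.
for act, init in [("relu", "kaiming"), ("tanh", "xavier")]:
    z = propagate(act=act, init=init)
    print(f"{act}+{init}: excess kurtosis of final pre-activations = "
          f"{excess_kurtosis(z.ravel()):.2f}")
```

In a narrow network like this, the empirical pre-activation distribution can deviate noticeably from Gaussian as depth grows, which is the phenomenon the talk's "myth or reality" question probes.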