- Message Passing mirrors forward and backward propagation
- How to deal with different output shapes
- Differentiable Normalization
- Virtual Residual Edges
- Meta-Batching
- Experimental Results
- Fine-Tuning experiments
- Public reception of the paper
- At , Boris mentions that they train the first variant; on closer examination, we decided it's more like the second
Description:
Explore a groundbreaking approach to deep learning in this 48-minute video featuring first author Boris Knyazev. Dive into parameter prediction for unseen deep architectures, where a Graph HyperNetwork is trained to predict high-performing weights for novel network architectures in a single forward pass, without conventional training. Learn about the DeepNets-1M dataset, the training process for the hypernetwork, and the use of Graph Neural Networks. Discover how message passing mirrors forward and backward propagation, techniques for handling different output shapes, and the roles of differentiable normalization and virtual residual edges. Examine experimental results, fine-tuning experiments, and the paper's public reception. Gain insight into a potentially more computationally efficient paradigm for training neural networks and its implications for the future of deep learning.
Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev
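For readers who want a concrete picture of the idea discussed in the video, here is a minimal, hypothetical sketch of a graph hypernetwork: node embeddings for an architecture's computation graph are refined by message passing in both edge directions (loosely mirroring forward and backward propagation, as in the chapter above), then decoded into per-layer weights. All names here (ToyGHN, the toy op ids, the fixed flat-vector decoder) are invented for illustration; this is a sketch of the general technique, not the authors' implementation, which lives in the paper's official code.

```python
import torch
import torch.nn as nn

class ToyGHN(nn.Module):
    """Hypothetical graph-hypernetwork sketch (not the paper's code):
    embed each node of a computation graph, refine embeddings by message
    passing in both edge directions, then decode each node's embedding
    into a flat parameter vector for that layer."""

    def __init__(self, num_op_types, hidden=32, max_params=3 * 3 * 16 * 16):
        super().__init__()
        self.embed = nn.Embedding(num_op_types, hidden)  # one embedding per op type
        self.msg = nn.Linear(hidden, hidden)             # message function
        self.upd = nn.GRUCell(hidden, hidden)            # node-state update
        self.decode = nn.Linear(hidden, max_params)      # flat weights per node

    def forward(self, op_types, edges, rounds=2):
        # op_types: (num_nodes,) long tensor of operation ids
        # edges: (2, num_edges) long tensor; edges[0][i] -> edges[1][i]
        h = self.embed(op_types)
        src, dst = edges
        for _ in range(rounds):
            # "forward" sweep: messages flow along the edge direction
            m = torch.zeros_like(h).index_add_(0, dst, self.msg(h)[src])
            h = self.upd(m, h)
            # "backward" sweep: edges reversed, echoing backpropagation
            m = torch.zeros_like(h).index_add_(0, src, self.msg(h)[dst])
            h = self.upd(m, h)
        return self.decode(h)

# Toy usage on a 3-node chain (conv -> bn -> linear); op ids are arbitrary.
ghn = ToyGHN(num_op_types=8)
ops = torch.tensor([0, 1, 2])
edges = torch.tensor([[0, 1], [1, 2]])
flat = ghn(ops, edges)               # (3, 2304): one flat vector per node
w_conv = flat[0].view(16, 16, 3, 3)  # reshape node 0's vector to a conv kernel
```

Note that this sketch handles the "different output shapes" problem only crudely, by reshaping a fixed-size flat vector per node; the actual method discussed in the video deals with varying layer shapes far more carefully.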