Chapters:
1. Intro
2. Machine Learning in one picture
3. Machine Learning with Graph Data: Applications
4. Outline
5. GNNs: Origins and Relations
6. Message Passing Graph Neural Networks
7. Message Passing for Node Embedding
8. Fully connected Neural Network (FNN)
9. Message Passing Tree
10. Function Approximation and Graph Distinction
11. Color refinement/Weisfeiler-Leman algorithm
12. Improving discriminative power
13. Node IDs and Local Algorithms
14. The challenge with generalization
15. Bounding the generalization gap
16. Neural Tangent Kernel
17. Computational structure
18. Algorithmic Alignment
19. Big picture: when may extrapolation "work"?
20. Extrapolation in fully connected ReLU networks
21. Implications for the full GNN
22. Open Questions...
Description:
Explore a comprehensive lecture on Graph Neural Networks (GNNs) that delves into their theoretical foundations, representation capabilities, and learning properties. Gain insights into the approximation and learning characteristics of message passing GNNs and higher-order GNNs, with a focus on function approximation, estimation, generalization, and extrapolation. Discover connections between GNNs and graph isomorphism, equivariant functions, local algorithms, and dynamic programming. Examine the challenges and potential solutions for improving discriminative power, generalization, and extrapolation in GNNs. Analyze the computational structure and algorithmic alignment of these models, and consider open questions in the field. Enhance your understanding of GNNs' applications in machine learning tasks involving nodes, graphs, and point configurations.
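To make the central idea concrete: in a message passing GNN, each layer updates a node's embedding by aggregating its neighbors' features and combining them with its own through learned weights. Below is a minimal sketch of one such round, not taken from the lecture; the names (`message_passing_round`, `W_self`, `W_neigh`) and the sum-aggregation/ReLU choices are illustrative assumptions.

```python
import numpy as np

def message_passing_round(features, adjacency, W_self, W_neigh):
    """One GNN layer: h_v' = ReLU(W_self @ h_v + W_neigh @ sum of neighbor h_u).

    All names and the sum/ReLU choices are illustrative, not the
    lecture's specific architecture.
    """
    messages = adjacency @ features           # sum neighbor features per node
    updated = features @ W_self.T + messages @ W_neigh.T
    return np.maximum(updated, 0.0)           # ReLU nonlinearity

# Toy graph: a path 0 - 1 - 2, with 2-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3, 2)                              # initial node features
rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 2))              # weights for the node's own feature
W2 = rng.standard_normal((2, 2))              # weights for the aggregated messages
H1 = message_passing_round(H, A, W1, W2)      # shape stays (3, 2)
```

Stacking k such rounds lets each node's embedding depend on its k-hop neighborhood, which is exactly the locality that ties these networks to color refinement/Weisfeiler-Leman style graph distinction discussed in the lecture.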

Theory of Graph Neural Networks: Representation and Learning

International Mathematical Union