Model Architecture: Encoding Higher-Order Structures
Model Architecture: Patching, Alignment and Concatenation
Model Architecture: Block Recurrent Transformer
Evaluation
Description:
Explore dynamic graph representation learning with efficient transformers in this conference talk from the Second Learning on Graphs Conference (LoG'23). Dive into the HOT model, which enhances link prediction by leveraging higher-order graph structures. Discover how k-hop neighbors and subgraphs are encoded into the attention matrix of transformers to improve accuracy. Learn about the challenges of increased memory pressure and the innovative solutions using hierarchical attention matrices. Examine the model's architecture, including encoding higher-order structures, patching, alignment, concatenation, and the block recurrent transformer. Compare HOT's performance against other dynamic graph representation learning schemes and understand its potential applications in various dynamic graph learning workloads.
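The description above stays high level; the sketch below illustrates only the general idea of injecting higher-order structure, here k-hop neighborhoods, into a transformer's attention matrix as an additive bias. It is a minimal, self-contained example assuming PyTorch, toy tensor shapes, and hypothetical helper names (`khop_bias`, `structure_aware_attention`); it is not the HOT implementation, which additionally uses hierarchical attention matrices, patching, alignment, concatenation, and a block recurrent transformer.

```python
# Illustrative sketch (assumptions, not the HOT code): bias the attention
# matrix so that a node only attends to nodes within its k-hop neighborhood.
import torch
import torch.nn.functional as F


def khop_bias(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Build an additive attention bias from k-hop reachability.

    adj: (N, N) binary adjacency matrix of a (sub)graph snapshot.
    Returns an (N, N) bias that is 0 where node j is within k hops of
    node i (including i itself) and -inf elsewhere.
    """
    reach = torch.eye(adj.size(0), dtype=adj.dtype)   # 0-hop: each node itself
    power = torch.eye(adj.size(0), dtype=adj.dtype)
    for _ in range(k):
        power = (power @ adj).clamp(max=1.0)          # reachable in one more hop
        reach = (reach + power).clamp(max=1.0)        # accumulate hops 0..k
    return torch.where(reach > 0,
                       torch.zeros_like(reach),
                       torch.full_like(reach, float("-inf")))


def structure_aware_attention(q, k_mat, v, bias):
    """Scaled dot-product attention with an additive structural bias."""
    scores = q @ k_mat.transpose(-2, -1) / q.size(-1) ** 0.5
    return F.softmax(scores + bias, dim=-1) @ v


# Toy usage: 4 nodes on a path graph, attention restricted to 2-hop neighbors.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(4, 8)                  # per-node embeddings
bias = khop_bias(adj, k=2)
out = structure_aware_attention(x, x, x, bias)
print(out.shape)                       # torch.Size([4, 8])
```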
HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers
Scalable Parallel Computing Lab, SPCL @ ETH Zurich