- Intro
- How Do Language Models Encode Code?
- Sinusoidal Encodings
- Signal Processing: DFT
- Graph Fourier Basis
- Magnetic Laplacian
- Harmonics for Directed Graphs
- Ambiguity of Eigenvectors
- Architecture
- Distance Prediction
- Correctness Prediction of Sorting Networks
- Open Graph Benchmark Code2
- Summary
- Q+A
Description:
Explore the application of transformers to directed graphs in this comprehensive conference talk by Simon Geisler from Valence Labs. Dive into direction- and structure-aware positional encodings for directed graphs, including eigenvectors of the Magnetic Laplacian and directional random walk encodings. Learn how these techniques can be applied to domains such as source code and logic circuits. Discover the benefits of incorporating directionality information in various downstream tasks, including correctness testing of sorting networks and source code understanding. Examine the data-flow-centric graph construction approach that outperforms previous state-of-the-art methods on the Open Graph Benchmark Code2. Follow along as the speaker covers topics like sinusoidal encodings, signal processing, Graph Fourier Basis, harmonics for directed graphs, and the architecture of the proposed model.
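The description highlights eigenvectors of the Magnetic Laplacian as direction-aware positional encodings. Below is a minimal sketch of that idea, not the speaker's implementation: it assumes the common formulation with a symmetrized adjacency and a complex phase term that encodes edge direction. The potential q, the number of eigenvectors k, and the function name are illustrative choices.

```python
# Minimal sketch of Magnetic Laplacian eigenvector positional encodings
# for a small directed graph (illustrative, not the talk's codebase).
import numpy as np

def magnetic_laplacian_encodings(A: np.ndarray, q: float = 0.25, k: int = 4) -> np.ndarray:
    """Return real-valued node features built from the eigenvectors of the
    Magnetic Laplacian associated with the k smallest eigenvalues."""
    A_sym = 0.5 * (A + A.T)                      # symmetrized adjacency
    D = np.diag(A_sym.sum(axis=1))               # degree matrix of the symmetrized graph
    theta = 2.0 * np.pi * q * (A - A.T)          # antisymmetric phase encodes edge direction
    L = D - A_sym * np.exp(1j * theta)           # Hermitian Magnetic Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)         # eigh returns eigenvalues in ascending order
    pe = eigvecs[:, :k]                          # eigenvectors of the k smallest eigenvalues
    return np.concatenate([pe.real, pe.imag], axis=1)  # (n, 2k) real-valued positional features

# Example: a 3-node directed path 0 -> 1 -> 2
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(magnetic_laplacian_encodings(A, q=0.25, k=2).shape)  # (3, 4)
```

For q = 0 the phase term vanishes and the construction reduces to the ordinary Laplacian of the symmetrized (undirected) graph; a nonzero potential q is what makes the resulting encodings sensitive to edge direction.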

Transformers Meet Directed Graphs - Exploring Direction-Aware Positional Encodings

Valence Labs