1. Intro
2. The Dawn of Deep Learning
3. Impact of Deep Learning on Mathematical Problems
4. Numerical Results
5. Graph Convolutional Neural Networks
6. Two Approaches to Convolution on Graphs
7. Spectral Graph Convolution
8. Spectral Filtering using Functional Calculus
9. Graphs Modeling the Same Phenomenon
10. Comparing the Repercussion of a Filter on Two Graphs
11. Transferability of Functional Calculus Filters
12. Rethinking Transferability
13. Fundamental Questions concerning Deep Neural Networks
14. General Problem Setting
15. What is Relevance?
16. The Relevance Mapping Problem
17. Rate-Distortion Viewpoint
18. Problem Relaxation
19. Observations
20. MNIST Experiment
Description:
Explore a seminar on theoretical machine learning that delves into the understanding of deep neural networks, focusing on generalization and interpretability. Gain insights from Gitta Kutyniok of Technische Universität Berlin as she discusses the dawn of deep learning, its impact on mathematical problems, and numerical results. Examine graph convolutional neural networks, including two approaches to convolution on graphs and spectral graph convolution. Investigate spectral filtering using functional calculus and compare the repercussion of filters on different graphs. Analyze the transferability of functional calculus filters and rethink transferability concepts. Address fundamental questions concerning deep neural networks, exploring the general problem setting, relevance mapping, and rate-distortion viewpoint. Conclude with observations from an MNIST experiment, providing a comprehensive overview of current research in deep learning theory and applications.
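To make the "spectral filtering using functional calculus" topic concrete, here is a minimal sketch, not taken from the talk itself: a functional-calculus filter applies a function g to the graph Laplacian, g(L) x = U g(Λ) Uᵀ x, where L = U Λ Uᵀ is the Laplacian's eigendecomposition. The 4-node path graph, the impulse signal, and the heat-kernel choice of g are illustrative assumptions.

```python
import numpy as np

# Adjacency matrix of a small 4-node path graph (illustrative choice)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Combinatorial graph Laplacian L = D - A
L = np.diag(A.sum(axis=1)) - A

# Spectral decomposition L = U diag(eigvals) U^T (L is symmetric)
eigvals, U = np.linalg.eigh(L)

def spectral_filter(x, g):
    """Apply the functional-calculus filter g(L) to a graph signal x:
    transform x into the Laplacian eigenbasis, scale each component
    by g(eigenvalue), and transform back."""
    return U @ (g(eigvals) * (U.T @ x))

# Impulse signal on node 0, filtered with a heat-kernel low-pass g(λ) = e^{-λ}
x = np.array([1.0, 0.0, 0.0, 0.0])
y = spectral_filter(x, lambda lam: np.exp(-lam))
```

Because the filter is defined through the Laplacian's spectrum rather than through node indices, the same function g can be applied on any graph, which is the setting in which the talk's transferability question arises.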

Understanding Deep Neural Networks - From Generalization to Interpretability

Institute for Advanced Study