1. Intro
2. Deep Learning: Theory vs Practice
3. Composition is Dynamics
4. Supervised Learning
5. The Problem of Approximation
6. Example: Approximation by Trigonometric Polynomials
7. The Continuum Idealization of Residual Networks
8. How do dynamics approximate functions?
9. Universal Approximation by Dynamics
10. Approximation of Symmetric Functions by Dynamical Hypothesis Spaces
11. Sequence Modelling Applications
12. DL Architectures for Sequence Modelling
13. Modelling Static vs Dynamic Relationships
14. An Approximation Theory for Sequence Modelling
15. The Recurrent Neural Network Hypothesis Space
16. The Linear RNN Hypothesis Space
17. Properties of Linear RNN Hypothesis Space
18. Approximation Guarantee (Density)
19. Smoothness and Memory
20. Insights on the (Linear) RNN Hypothesis Space
21. Convolutional Architectures
22. Encoder-Decoder Architectures
23. Extending the RNN Analysis
Description:
Explore a 45-minute lecture on the approximation theory of deep learning from a dynamical perspective, delivered by Qianxiao Li of the National University of Singapore. Delve into the intersection of machine learning and dynamical systems, examining how composition in deep learning can be viewed as dynamics. Investigate supervised learning, the problem of approximation, and universal approximation by dynamics. Analyze sequence-modeling applications, including the deep learning architectures used for them and the corresponding approximation theory. Examine recurrent neural network hypothesis spaces, their properties, and their approximation guarantees. Gain insights into convolutional and encoder-decoder architectures, and discover how the analysis of recurrent neural networks extends to these structures.
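
A minimal sketch, not taken from the lecture itself, of the "composition is dynamics" viewpoint it discusses: stacking residual blocks x_{t+1} = x_t + f(x_t, theta_t) can be read as a forward-Euler discretization of the ODE dx/dt = f(x, theta(t)), so a deep network corresponds to evolving an input through a dynamical system. The width, depth, and tanh nonlinearity below are illustrative choices, not the lecture's setup.

import numpy as np

def residual_block(x, W, b):
    # One residual block: identity plus a learned update (a tanh layer here),
    # i.e. one explicit Euler step x <- x + f(x, theta) with step size 1.
    return x + np.tanh(W @ x + b)

rng = np.random.default_rng(0)
depth, dim = 20, 4
# Random, untrained parameters; in practice theta_t is learned per layer.
params = [(0.1 * rng.standard_normal((dim, dim)), np.zeros(dim)) for _ in range(depth)]

x = rng.standard_normal(dim)   # network input = initial state of the dynamics
for W, b in params:            # deep composition = discrete-time evolution
    x = residual_block(x, W, b)
print(x)                       # network output = terminal state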

Approximation Theory of Deep Learning from the Dynamical Viewpoint

Fields Institute