1. Intro
2. Deep Neural Networks
3. ReLU Networks
4. Architecture of Neural Networks
5. Structure of T_{W,L}
6. Comparing T_{W,L} with other approaches
7. Approximation Error
8. Approximation Classes
9. More General Construction
10. Consequences
11. Extremes
12. Let Us Be Careful
13. Manifold Approximation
14. Three Theorems
15. Covering
16. Last Thoughts
Description:
Explore the mathematics behind deep learning in this 47-minute conference talk on nonlinear approximation using deep ReLU networks. Delve into the architecture of neural networks, focusing on ReLU activation functions and their role in approximation theory. Examine the structure of T_{W,L}, compare it with other approaches, and analyze approximation errors and classes. Investigate more general constructions and their consequences, including extremes and manifold approximation. Learn about three key theorems and covering techniques. Gain insights into cutting-edge advances in data science, bridging gaps between computational statistics, machine learning, optimization, information theory, and learning theory.
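
As a rough illustration of the objects discussed in the talk (this sketch is not code from the talk, and the interpretation of T_{W,L} as the family of functions computed by fully connected ReLU networks of width W and depth L is an assumption), the Python snippet below constructs one such network with random weights. All function and parameter names here are illustrative.

    import numpy as np

    def relu(x):
        # ReLU activation: max(0, x), applied componentwise
        return np.maximum(0.0, x)

    def random_relu_network(d_in, W, L, d_out=1, rng=None):
        # One element of the (assumed) family T_{W,L}: a fully connected
        # network with L hidden layers of width W and random weights.
        # T_{W,L} itself would be the set of ALL such functions as the
        # weights and biases vary.
        rng = rng or np.random.default_rng(0)
        dims = [d_in] + [W] * L + [d_out]
        params = [(rng.standard_normal((m, n)) / np.sqrt(n),
                   rng.standard_normal(m))
                  for n, m in zip(dims[:-1], dims[1:])]

        def f(x):
            h = np.asarray(x, dtype=float)
            for A, b in params[:-1]:
                h = relu(A @ h + b)   # hidden layers apply ReLU
            A, b = params[-1]
            return A @ h + b          # linear output layer
        return f

    # A width-8, depth-3 ReLU network on R^2. Every such network computes
    # a continuous piecewise-linear function, which is why the talk can
    # compare these families with classical nonlinear approximation tools
    # such as free-knot splines.
    f = random_relu_network(d_in=2, W=8, L=3)
    print(f(np.array([0.5, -1.0])))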

Nonlinear Approximation by Deep ReLU Networks - Ron DeVore, Texas A&M

Alan Turing Institute