1. Introduction
2. Feedforward neural networks
3. Studying the expressivity of DNNs
4. Example: the ReLU activation function
5. ReLU networks
6. Universal approximation property
7. Why sparsely connected networks?
8. Same sparsity - various network shapes
9. Approximation with sparse networks
10. Direct vs inverse estimate
11. Notion of approximation space
12. Role of skip-connections
13. Counting neurons vs connections
14. Role of the activation function
15. The case of spline activation functions (Theorem 2)
16. Guidelines to choose an activation?
17. Rescaling equivalence with the ReLU
18. Benefits of depth?
19. Role of depth
20. Set-theoretic picture
21. Summary: Approximation with DNNs
22. Overall summary & perspectives
Description:
Explore the intricacies of deep network approximation in this 50-minute conference talk by Remi Gribonval from Inria, presented at the Alan Turing Institute. Delve into the mathematics of data science, bridging computational statistics, machine learning, optimization, information theory, and learning theory. Begin with an introduction to feedforward neural networks and study the expressivity of deep neural networks (DNNs). Examine the ReLU activation function and its universal approximation properties. Investigate the benefits of sparsely connected networks and various network shapes. Compare direct and inverse estimates in approximation with sparse networks. Understand the role of skip-connections, neuron count vs. connections, and activation functions. Analyze spline activation functions and guidelines for choosing appropriate activations. Explore the benefits of network depth and its impact on approximation capabilities. Conclude with a comprehensive summary of approximation with DNNs and future perspectives in this cutting-edge field.
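
For context on the terminology in the outline and description: the ReLU activation is the function relu(x) = max(0, x), and a feedforward ReLU network alternates affine maps with this nonlinearity. The Python sketch below is purely illustrative and not material from the talk; the layer sizes, weights, and function names are arbitrary choices.

import numpy as np

def relu(x):
    # ReLU activation: elementwise max(0, x)
    return np.maximum(0.0, x)

def feedforward(x, weights, biases):
    # Plain fully connected feedforward network: alternate affine maps
    # W @ h + b with the ReLU nonlinearity; the last layer stays linear.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Toy network with arbitrary shapes: input dim 3 -> 5 -> 5 -> output dim 1
rng = np.random.default_rng(0)
weights = [rng.standard_normal((5, 3)),
           rng.standard_normal((5, 5)),
           rng.standard_normal((1, 5))]
biases = [np.zeros(5), np.zeros(5), np.zeros(1)]
print(feedforward(rng.standard_normal(3), weights, biases))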

Approximation with Deep Networks - Remi Gribonval, Inria

Alan Turing Institute