1. Introduction
2. Speaker Introduction
3. Overview
4. Neural Networks
5. The Curse of Dimensionality
6. Theory
7. Main Question
8. Manifold Learning Community
9. Reach of a Manifold
10. Linear Regression
11. Approximation Theory
12. Classification
13. Excess Risk
14. Recent Work
15. Chart Autoencoders
16. Neural Network Construction
17. Linear Encoders
18. Clustered Data
19. Questions
20. Conclusion
21. Hybrid Seminar
Description:
Explore the intricacies of deep learning networks and their ability to adapt to intrinsic dimensionality in this seminar by Alexander Cloninger from UC San Diego. Delve into the central question of network size requirements for function approximation and how data dimensionality impacts learning. Examine ReLU networks' approximation capabilities for functions with dimensionality-reducing feature maps, focusing on projections onto low-dimensional submanifolds and distances to low-dimensional sets. Discover how deep nets remain faithful to an intrinsic dimension governed by the function rather than domain complexity. Investigate connections to two-sample testing, manifold autoencoders, and data generation. Learn about Dr. Cloninger's research in geometric data analysis and applied harmonic analysis, exploring applications in imaging, medicine, and artificial intelligence.
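
To make the "dimensionality-reducing feature map" setting concrete, here is a minimal Python sketch (illustrative only, not code from the seminar; the map A, the link function g, and all sizes are assumptions): it builds a regression target f(x) = g(Ax) whose intrinsic dimension d is far below the ambient dimension D, then fits it with a small ReLU network.

# Minimal sketch: target f(x) = g(Ax) with intrinsic dimension d << ambient dimension D.
# All names, sizes, and the choice of g below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
D, d, n = 50, 2, 2000           # ambient dim, intrinsic dim, sample count
A = torch.randn(d, D) / D**0.5  # linear dimensionality-reducing feature map x -> Ax

def f(x):
    z = x @ A.T                               # project into the d-dimensional feature space
    return torch.sin(z[:, :1]) + z[:, 1:2]**2 # g depends only on z in R^d

X = torch.randn(n, D)
y = f(X)

# A small ReLU network fit to the D-dimensional inputs.
net = nn.Sequential(nn.Linear(D, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()
print(f"train MSE: {loss.item():.4f}")

Under the seminar's thesis, the network size needed to approximate such a target is governed by the intrinsic dimension d of the function rather than the ambient dimension D of the domain.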

Networks that Adapt to Intrinsic Dimensionality Beyond the Domain

Inside Livermore Lab