Limited performance for smooth, univariate approximation
Balancing architecture size
Smooth, multivariate functions
Piecewise smooth function approximation
Theoretical insights
DNN existence theory for holomorphic functions
Practical DNN existence theorem: Hilbert-valued case
Discussion
Deep learning for inverse problems
Further examples
These are not rare events
Unpredictable generalization
The universal instability theorem
Hallucinations in practice
Construction: unravelling and restarts
FIRENETS example
Conclusions
Description:
Explore deep learning applications in scientific computing through two case studies that highlight the gap between theory and practice. Delve into high-dimensional function approximation and inverse problems in imaging, examining the limitations of current approaches in stability, generalization, and practical performance. Discover recent theoretical advances demonstrating the potential of deep neural networks to match best-in-class schemes in both settings. Gain insights into achieving robust, reliable, and improved practical performance in scientific computing with deep learning. Learn about the challenges of parametric modeling, the limited performance of current methods for smooth function approximation, and unpredictable generalization in inverse problems. Understand the universal instability theorem and its implications for deep learning in scientific applications.
Deep Learning for Scientific Computing - Two Stories on the Gap Between Theory & Practice - Ben Adcock