Recovery with sparsity constraints: discretization
Structure of iterative reconstruction algorithm
Connection with deep neural networks
Deep neural networks and splines
Feedforward deep neural network
CPWL functions in high dimensions
Algebra of CPWL functions
Implication for deep ReLU neural networks
CPWL functions: further properties
Constraining activation functions
Representer theorem for deep neural networks
Outcome of representer theorem
Optimality results
Deep spline networks: Discussion
Deep spline networks (Cont'd)
Description:
Explore a comprehensive lecture on splines and imaging, covering the journey from compressed sensing to deep neural networks. Delve into the optimality of splines for solving inverse problems in imaging and for designing deep neural networks. Examine a representer theorem demonstrating that the extreme points of linear inverse problems with generalized total-variation regularization are adaptive splines. Discover the connection between continuous-domain solutions and compressed-sensing algorithms. Investigate the application of the theorem to optimizing activation shapes in deep neural networks, leading to the concept of "optimal" deep-spline networks with piecewise-linear spline activations. Gain insights into the variational justification of the ReLU architecture and explore the new computational challenges in determining optimal activations. Learn about the structure of iterative reconstruction algorithms, the algebra of CPWL functions, and the implications for deep ReLU neural networks.
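The representer theorem mentioned above can be sketched as follows; this is a hedged paraphrase in the style of Unser-type results, and the exact statement and notation in the lecture may differ. One considers a continuous-domain inverse problem with measurement functionals $\nu_m$, a regularization operator $\mathrm{L}$, and a total-variation (Radon) norm $\|\cdot\|_{\mathcal{M}}$:

```latex
\min_{f} \ \sum_{m=1}^{M} E\bigl(y_m, \langle \nu_m, f\rangle\bigr)
         \;+\; \lambda \,\|\mathrm{L} f\|_{\mathcal{M}}
```

Its extreme-point solutions take the form of an adaptive spline with at most as many knots as measurements,

```latex
f(x) \;=\; \sum_{k=1}^{K} a_k \, \rho_{\mathrm{L}}(x - x_k) \;+\; p(x),
\qquad K \le M,
```

where $\rho_{\mathrm{L}}$ is a Green's function of $\mathrm{L}$ and $p$ lies in the null space of $\mathrm{L}$. For the second-derivative operator $\mathrm{L} = \mathrm{D}^2$, the Green's function is the ReLU, which is what links the theorem to piecewise-linear spline activations.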
Splines and Imaging - From Compressed Sensing to Deep Neural Networks
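A small numerical illustration of the CPWL property discussed in the lecture: a one-hidden-layer ReLU network on the real line is continuous and piecewise-linear, with at most one knot per hidden unit. The sketch below (network sizes and thresholds are illustrative choices, not from the lecture) checks this by evaluating a random ReLU network on a fine grid and observing that its second finite differences vanish everywhere except near the kinks.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 1-D ReLU network: f(x) = W2 @ relu(W1 * x + b1) + b2,
# with 8 hidden units (an arbitrary illustrative size).
W1 = rng.standard_normal((8, 1))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((1, 8))
b2 = rng.standard_normal(1)

def relu_net(x):
    """Evaluate the network on a 1-D array of inputs."""
    h = np.maximum(W1 * x + b1[:, None], 0.0)   # hidden layer, shape (8, n)
    return (W2 @ h + b2[:, None]).ravel()

x = np.linspace(-3.0, 3.0, 2001)
y = relu_net(x)

# CPWL check: second finite differences are (numerically) zero on each
# linear piece; a kink between two grid points can touch at most two
# adjacent difference stencils, so at most 2 nonzero entries per ReLU.
d2 = np.abs(np.diff(y, 2))
n_kink_hits = int(np.sum(d2 > 1e-8))
print("nonzero second differences:", n_kink_hits, "of", d2.size)
```

Deeper ReLU networks stay CPWL as well, since compositions, sums, and maxima of CPWL functions are again CPWL, which is the "algebra of CPWL functions" referenced in the outline; only the number of linear regions grows.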