Explore the foundations of inverse problems and machine learning in this 48-minute conference talk from the Alan Turing Institute. Delve into the "mother of all representer theorems" as presented by Michael Unser. Begin with the variational formulation of inverse problems and the view of learning as a linear inverse problem. Examine the Reproducing Kernel Hilbert Space (RKHS) representer theorem for machine learning before exploring the possibility of a unifying representer theorem. Investigate Banach spaces, their duals, and the generalization of the duality mapping. Learn about kernel methods in machine learning, Tikhonov regularization, and the qualitative effects of Banach conjugation. Analyze sparsity-promoting regularization, extreme points, and the geometry of ℓ2 versus ℓ1 minimization. Discover the isometry with the space of Radon measures and explore sparse kernel expansions, including the special case of translation-invariant kernels. Compare RKHS methods with sparse kernel expansions in the context of linear shift-invariant (LSI) systems. Gain insight into the mathematical foundations underlying modern data science and machine learning techniques; a brief sketch of the two central statements follows below.
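For context, here is a standard statement of the classical RKHS representer theorem alongside the ℓ1-type variant that the talk contrasts it with, written in generic notation (the kernel k, data pairs (x_m, y_m), dictionary A, and regularization weight λ below are illustrative placeholders, not the speaker's notation):

% Classical RKHS representer theorem (Tikhonov / l2 regularization):
% the minimizer over an RKHS H with reproducing kernel k is a finite
% kernel expansion centered on the data points.
\min_{f \in \mathcal{H}} \; \sum_{m=1}^{M} \bigl( y_m - f(x_m) \bigr)^2 + \lambda \, \| f \|_{\mathcal{H}}^2
\quad \Longrightarrow \quad
f^{\star}(x) = \sum_{m=1}^{M} a_m \, k(x, x_m)

% Sparsity-promoting (l1-type) regularization: the extreme points of the
% l1 ball are the signed canonical vectors, so minimizers of
\min_{c \in \mathbb{R}^{N}} \; \| y - A c \|_2^2 + \lambda \, \| c \|_1
% tend to have few nonzero coefficients -- the finite-dimensional analogue
% of the sparse, Radon-measure-based kernel expansions discussed in the talk.

In the ℓ2 case, the expansion weights solve the linear system (K + λI)a = y, where K is the Gram matrix with entries K_{mn} = k(x_m, x_n), which is why Tikhonov regularization yields a dense, linear-filter-like solution while the ℓ1 geometry favors sparse ones.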
The Mother of All Representer Theorems for Inverse Problems and Machine Learning - Michael Unser