Insights on Gradient-Based Algorithms in High-Dimensional Learning

Explore gradient-based algorithms in high-dimensional learning in this Richard M. Karp Distinguished Lecture. Delve into the analysis of gradient-descent algorithms and their noisy variants in nonconvex settings, and examine several high-dimensional statistical learning problems where the performance of gradient-based algorithms can be analyzed precisely. Discover how statistical physics provides exact closed-form solutions for algorithm performance in the high-dimensional limit. Topics covered include the spiked mixed matrix-tensor model, the perceptron, and phase retrieval. Gain insights into dynamical mean-field theory, Langevin dynamics, and stochastic gradient descent; investigate phase diagrams, landscape analysis, and the teacher-student perceptron model; and understand the behavior of gradient descent in phase retrieval as well as theories for over-parameterized landscapes.
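The lecture presents analytical results rather than code, but a minimal simulation can make the setting concrete. The sketch below (all dimensions, step sizes, and names are illustrative choices, not taken from the lecture) runs plain gradient descent on a toy Gaussian phase-retrieval instance; a comment notes how adding Gaussian noise to each step turns it into the Langevin dynamics the lecture analyzes.

```python
import numpy as np

# Illustrative sketch (not from the lecture): gradient descent on a toy
# Gaussian phase-retrieval instance with loss
#   L(x) = (1/4m) * sum_i ((a_i . x)^2 - y_i)^2,  where y_i = (a_i . x_star)^2.
# Dimensions, step size, and iteration count are arbitrary choices.

rng = np.random.default_rng(0)
n, m = 100, 600                       # dimension and number of measurements
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # unit-norm planted signal
A = rng.standard_normal((m, n))       # Gaussian sensing vectors a_i (rows of A)
y = (A @ x_star) ** 2                 # noiseless intensity measurements

def grad(x):
    """Gradient of L at x: (1/m) * sum_i ((a_i.x)^2 - y_i) * (a_i.x) * a_i."""
    z = A @ x
    return A.T @ ((z ** 2 - y) * z) / m

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                # random (uninformed) initialization
lr = 0.1
for _ in range(3000):
    x -= lr * grad(x)
    # Langevin variant: also add rng.standard_normal(n) * np.sqrt(2 * lr * T)
    # at each step for a chosen temperature T, giving the noisy dynamics.

# The signal is identifiable only up to sign, so report the normalized overlap.
overlap = abs(x @ x_star) / np.linalg.norm(x)
print(f"overlap with planted signal: {overlap:.3f}")  # near 1.0 means recovery
```

Whether gradient descent from random initialization reaches the planted signal here depends on the sample ratio m/n; scanning that ratio and tracking the final overlap is the empirical counterpart of the phase diagrams discussed in the lecture.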