Insights on Gradient-Based Algorithms in High-Dimensional Learning
Simons Institute

Chapters:
1. Intro
2. WORKHORSE OF MACHINE LEARNING
3. IN DEEP LEARNING
4. STRATEGY
5. WHY THIS MODEL?
6. ESTIMATORS
7. GRADIENT-BASED ALGORITHMS
8. DYNAMICAL MEAN FIELD THEORY
9. LANGEVIN STATE EVOLUTION (NUMERICAL SOLUTION)
10. LANGEVIN PHASE DIAGRAM
11. GRADIENT-FLOW PHASE DIAGRAM
12. POPULAR "EXPLANATION"
13. SPURIOUS MINIMA DO NOT NECESSARILY CAUSE GF TO FAIL
14. WHAT IS GOING ON?
15. TRANSITION RECIPE
16. TRANSITION CONJECTURE
17. LANDSCAPE ANALYSIS
18. CONCLUSION ON SPIKED MATRIX-TENSOR MODEL
19. TEACHER-NEURAL SETTING
20. TEACHER-STUDENT PERCEPTRON
21. PHASE RETRIEVAL: OPTIMAL SOLUTION
22. GRADIENT DESCENT FOR PHASE RETRIEVAL
23. PERFORMANCE OF GRADIENT DESCENT
24. GRADIENT DESCENT NUMERICALLY
25. TOWARDS A THEORY
26. OVER-PARAMETRISED LANDSCAPE
27. STOCHASTIC GRADIENT DESCENT
28. DYNAMICAL MEAN-FIELD THEORY (Mignaco, Urbani, Krzakala, LZ, arXiv:2006.06098)
29. DMFT FOLLOWS THE WHOLE TRAJECTORY
Description:
Explore gradient-based algorithms in high-dimensional learning through this Richard M. Karp Distinguished Lecture. Delve into the analysis of gradient descent algorithms and their noisy variants in nonconvex settings. Examine several high-dimensional statistical learning problems where gradient-based algorithm performance can be analyzed precisely. Discover how statistical physics provides exact closed-form solutions for algorithm performance in the high-dimensional limit. Cover topics including the spiked mixed matrix-tensor model, the perceptron, and phase retrieval. Gain insights into dynamical mean-field theory, Langevin dynamics, and stochastic gradient descent. Investigate phase diagrams, landscape analysis, and the teacher-student perceptron model. Understand the behavior of gradient descent in phase retrieval and explore theories for over-parameterized landscapes.
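
To make the phase-retrieval part of the description concrete, here is a minimal NumPy sketch (not from the lecture) of randomly initialized gradient descent in the teacher-student phase retrieval setting. The dimension, sampling ratio, learning rate, and step count are illustrative assumptions, not values used by the speaker.

```python
import numpy as np

# Teacher-student phase retrieval with plain gradient descent.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
d = 100                 # signal dimension
n = 8 * d               # number of measurements; the ratio n/d controls
                        # whether gradient descent from random init succeeds
lr, steps = 0.05, 2000

# Teacher signal and phaseless measurements y_i = (a_i . x_star)^2.
x_star = rng.normal(size=d) / np.sqrt(d)
A = rng.normal(size=(n, d))
y = (A @ x_star) ** 2

def grad(w):
    """Gradient of the quartic loss L(w) = (1/4n) * sum_i ((a_i.w)^2 - y_i)^2."""
    p = A @ w
    return A.T @ ((p ** 2 - y) * p) / n

# Random initialization: in high dimension w starts nearly orthogonal
# to x_star, which is the regime where nonconvexity matters.
w = rng.normal(size=d) / np.sqrt(d)
for _ in range(steps):
    w -= lr * grad(w)

# Overlap measures recovery up to the unavoidable global sign flip.
overlap = abs(w @ x_star) / (np.linalg.norm(w) * np.linalg.norm(x_star))
print(f"overlap with teacher: {overlap:.3f}")
```

Re-running this sketch with different ratios n/d shows the kind of success/failure behavior that the lecture's phase diagrams characterize exactly in the high-dimensional limit.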
