1. Intro
2. What is variational
3. Gradient-based optimization
4. Covariant operator
5. Discretization
6. Summary
7. Gradient Flow
8. Hamiltonian Formulation
9. Gradient Descent
10. Diffusions
11. Assumptions
12. Gradient Descent Structure
13. Avoiding Saddle Points
14. Differential geometry
15. Nonconvex optimization
16. Stochastic gradient control
Description:
Explore gradient-based optimization techniques in this lecture by Michael Jordan from UC Berkeley. Delve into accelerated, distributed, asynchronous, and stochastic methods for machine learning optimization. Learn about variational approaches, covariant operators, discretization, gradient flow, Hamiltonian formulation, and gradient descent structures. Discover strategies for avoiding saddle points, understand the role of differential geometry in nonconvex optimization, and gain insights into stochastic gradient control. Enhance your understanding of computational challenges in machine learning through this comprehensive exploration of advanced optimization concepts.
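Gradient descent, the baseline method the lecture builds on, can be sketched in a few lines; the quadratic objective and step size below are illustrative assumptions, not taken from the lecture.

```python
# Minimal sketch of plain gradient descent: repeat x <- x - step * grad(x).
# The objective f(x) = (x - 3)^2 and step size 0.1 are illustrative choices.

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Run fixed-step gradient descent from x0 and return the final iterate."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Gradient of f(x) = (x - 3)^2 is 2 * (x - 3); the minimizer is x = 3.
minimizer = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With this step size each iteration contracts the error by a factor of 0.8, so after 100 iterations the iterate is within about 1e-9 of the minimizer at 3.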

On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic

Simons Institute