Outline:
1. Intro
2. Interfaces Between Users and Optimizers?
3. Optimization in Machine Learning: New Interfaces?
4. Possible Paradigm for Optimization Theory in ML?
5. This Talk: New Objective for Learning One-hidden-layer Neural Networks
6. The Straightforward Objective Fails
7. An Analytic Formula
8. Provable Non-convex Optimization Algorithms?
9. Conclusion
Description:
Explore learning one-hidden-layer neural networks through landscape design in this 32-minute conference talk by Tengyu Ma of Stanford University. The talk examines the challenges of optimization in machine learning and presents a new objective for training neural networks: it explains why the straightforward objective fails, introduces an analytic formula that repairs it, and discusses provable non-convex optimization algorithms along with possible paradigms for optimization theory in machine learning. This Simons Institute presentation, part of the "Optimization, Statistics and Uncertainty" series, explores the interfaces between users and optimizers, offering useful perspective for researchers and practitioners in neural networks and machine learning optimization.

Learning One-Hidden-Layer Neural Networks with Landscape Design

Simons Institute