1. Intro
2. Models can do more than sample data
3. Model-based control works on real systems, despite modeling errors
4. Inverse dynamics for control
5. Acceleration-based Direct Optimization (ADO)
6. Example: 2D hopper
7. Combining trajectory optimization and function approximation
8. Analytical policy gradient
9. Optimization vs. discovery
10. Discovery is usually done by humans
11. Automated discovery with Contact Invariant Optimization (CIO)
Description:
Explore model-based control of physical systems in this 49-minute lecture from the 2019 ADSI Summer Workshop on Algorithmic Foundations of Learning and Control. Delve into Emo Todorov's presentation, which covers the power of models beyond data sampling, the effectiveness of model-based control on real systems despite modeling errors, and inverse dynamics for control. Learn about Acceleration-based Direct Optimization (ADO) with a 2D hopper example, and understand the combination of trajectory optimization and function approximation. Examine analytical policy gradient and the distinction between optimization and discovery, with emphasis on automated discovery using Contact Invariant Optimization (CIO). Gain insights into the human role in discovery and the potential for automation in this field.

ADSI Summer Workshop: Algorithmic Foundations of Learning and Control

Paul G. Allen School