1. Introduction
2. Backflips
3. Models
4. Experiments
5. Perception
6. Types of Models
7. Q Functions
8. Deep Models
9. Data vs Physics
10. State Space
11. Use Cases
12. Online Optimization
13. State Estimation
14. Physics Models
15. Lagrangian State Parameters
16. Gradient-Based Policy Search
17. L4DC
18. Static Output Feedback
19. Over-Parameterization
20. Euler Parameters
21. Lessons from Roboticists
22. The 2D Hopper
23. Rare Event Simulation
24. Traditional Approach
25. Failure Cases
26. Hopping Robot
27. Manipulation Notes
28. Planar Gripper
29. Robot Simulators
30. How to Run a Robot
31. Occupation Measures
32. Convergence
33. Language and State
34. Linear Models
Description:
Explore challenging problems in robotics through this lecture from the Theory of Reinforcement Learning Boot Camp. Delve into topics such as backflips, models, experiments, perception, and Q functions. Examine deep models, data-vs-physics approaches, state space, and use cases for online optimization and state estimation. Investigate physics models, Lagrangian state parameters, gradient-based policy search, and static output feedback. Learn about over-parameterization, Euler parameters, and lessons from roboticists. Analyze the 2D hopper, rare event simulation, traditional approaches, and failure cases. Discover insights on hopping robots, manipulation, planar grippers, robot simulators, occupation measures, convergence, and the relationship between language and state in linear models.

A Few Challenge Problems from Robotics

Simons Institute