1. Introduction
2. James Webb Space Telescope
3. Robot Learning Workflow
4. Complex vs Reliable
5. Abstraction and Composition
6. System Perspective
7. Compositional Robot Autonomy Stack
8. Neural Task Programming
9. Robotic Grasping
10. Characterization of Objects
11. GIGA
12. Neural Fields
13. Supervised Procedure
14. Real Reward Experiments
15. Body Interaction
16. Physical Interaction
17. Concrete Approach
18. Interactive Digital Training
19. Questions
20. First Bus
21. Work is First
22. Conclusion
23. Classroom
24. Context Principle
25. Maple
26. Grasping
27. Action Space
28. Atomic Primitives
29. Task Sketch
30. Conclusions
31. What we learned
32. Skill
33. AI Architecture
34. New Frontier
35. Questions and Answers
Description:
Explore the future of robot autonomy in this Stanford seminar featuring Yuke Zhu from UT Austin. The talk examines how recent advances in deep learning can be integrated with established engineering principles to build scalable autonomous systems. It covers state-action abstractions and their role in a compositional autonomy stack; GIGA and Ditto for learning actionable object representations; and BUDS and MAPLE for scaffolding long-horizon tasks with sensorimotor skills. The seminar also addresses the challenges of generalization and robustness in robot learning algorithms, potential paths toward widespread deployment, and future research directions for scalable robot autonomy, touching on topics such as the James Webb Space Telescope, neural task programming, robotic grasping, and interactive digital training.

Stanford Seminar - Objects, Skills, and the Quest for Compositional Robot Autonomy

Stanford University