1. Intro
2. Sequential Decision Making in Complex Environments
3. Data from the Existing Simulation Environments
4. Procedural Generation of New Environments
5. Benchmarking RL Generalization
6. Benchmarking Safe Reinforcement Learning
7. Benchmarking Multi-Agent Reinforcement Learning
8. Real2Sim: Learning to Generate Traffic Scenarios
9. Pretraining Policy Representation with Real-World Data
10. Self-supervised Learning through Contrastive Learning
11. Policy Pretraining with Human Actions
12. Action-conditioned Contrastive Learning
13. Pretrained Representation for Imitation Learning
14. Human-in-the-loop Reinforcement Learning
15. Human-AI Copilot Optimization (HACO)
16. Demo Video: Learning to Drive in the CARLA Environment
17. Policy Dissection through Frequency Analysis
Description:
Explore recent advances in machine autonomy and generalizable embodied AI in this remote talk given at the Stanford Graphics Group. The talk covers sequential decision making in complex environments, procedural generation of new environments, and benchmarking reinforcement learning generalization. It presents approaches such as Real2Sim for generating traffic scenarios, policy pretraining with real-world data, and self-supervised learning through contrastive methods, and discusses human-in-the-loop reinforcement learning and human-AI copilot optimization. It concludes with policy dissection through frequency analysis and a demo of learning to drive in the CARLA environment.

Toward Generalizable Embodied AI for Machine Autonomy

Bolei Zhou