1. Introduction
2. Autonomous Driving
3. Changing Lanes
4. Coordination Tasks
5. Understanding the Action Space
6. Nonstationary Agents
7. Nonstationarity
8. Reducing Nonstationarity
9. Representation Learning
10. Merrill
11. Stable
12. Lilly
13. Summary
14. Modular Architecture
15. Collaborative Multiarm Bandits
16. Robotics Example
17. Thank you
18. Humans are smarter than robots
19. Collaborative tasks
20. Intrinsic motivation
Description:
Explore the challenges and lessons learned in partner modeling for decentralized multi-agent coordination in this 55-minute lecture by Dorsa Sadigh from Stanford University. Delve into the role of representation learning in developing effective conventions and latent partner strategies, and discover how to leverage these conventions within reinforcement learning loops for coordination, collaboration, and influencing. Examine strategies for stabilizing latent partner representations to reduce non-stationarity and achieve more desirable learning outcomes. Investigate the formalization of decentralized multi-agent coordination as a collaborative multi-armed bandit with partial observability, and learn how partner modeling strategies can achieve logarithmic regret. Gain insights into autonomous driving, coordination tasks, and collaborative robotics while exploring topics such as intrinsic motivation and the superiority of human intelligence in complex scenarios.
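The description mentions formalizing decentralized coordination as a multi-armed bandit where partner-modeling strategies achieve logarithmic regret. As a point of reference for that claim, here is a minimal sketch of the standard single-agent UCB1 algorithm on Bernoulli arms, whose regret likewise grows logarithmically in the horizon. This is an illustrative baseline only, not the decentralized, partially observable algorithm from the lecture; the function name and arm parameters are invented for the example.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms; return cumulative (pseudo-)regret."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k        # pulls per arm
    sums = [0.0] * k        # total reward per arm
    best = max(arm_means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1     # pull each arm once to initialize
        else:
            # Pick the arm maximizing empirical mean + exploration bonus.
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]
    return regret
```

Because the exploration bonus shrinks as `sqrt(log t / n)`, suboptimal arms are pulled only O(log T) times, so doubling the horizon adds roughly a constant amount of regret rather than doubling it.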

The Role of Conventions in Adaptive Human-AI Interaction

Simons Institute