1. Introduction
2. Welcome
3. Reinforcement Learning
4. Nash Equilibrium
5. Fictitious Play
6. Multiagent Learning
7. Literature Review
8. Motivation
9. Outline
10. Stochastic Game
11. Optimality
12. Top Game Theory
13. Mathematical Dynamics
14. Learning Rates
15. Convergence Analysis
16. Differential Inclusion Approximation
17. Lyapunov Function
18. Harriss Lyapunov Function
19. Zero Sum Case
20. Zero Potential Case
21. Convergence
22. Monotonicity
23. Model-Free
24. Individual Q-Learning
Description:
Explore a 46-minute lecture on independent learning dynamics for stochastic games in multi-agent reinforcement learning. Delve into the challenges of applying classical reinforcement learning to multi-agent scenarios and discover recently proposed independent learning dynamics that guarantee convergence in stochastic games. Examine both zero-sum and single-controller identical-interest settings, while revisiting key concepts from game theory and reinforcement learning. Learn about the mathematical novelties in analyzing these dynamics, including differential inclusion approximation and Lyapunov functions. Gain insights into topics such as Nash equilibrium, fictitious play, and model-free individual Q-learning, all within the context of dynamic multi-agent environments.
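To make the fictitious-play idea mentioned above concrete: each player tracks the empirical frequency of the opponent's past actions and best-responds to that belief. The sketch below is a minimal illustration in a 2x2 coordination game (the payoff matrix and horizon are placeholders, not from the lecture, which studies the richer stochastic-game setting); in this game the beliefs converge to a pure Nash equilibrium.

```python
import numpy as np

# Illustrative sketch of fictitious play in a 2x2 identical-interest
# (coordination) game. Both players get A[i, j] when they play (i, j).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# counts[i][a] = how often player i has played action a (uniform prior).
counts = [np.ones(2), np.ones(2)]

for t in range(1000):
    # Each player's belief is the opponent's empirical action frequency.
    beliefs = [c / c.sum() for c in counts]
    # Best response to the opponent's empirical mixture.
    a0 = int(np.argmax(A @ beliefs[1]))      # row player
    a1 = int(np.argmax(A.T @ beliefs[0]))    # column player
    counts[0][a0] += 1
    counts[1][a1] += 1

# Beliefs concentrate on the pure Nash equilibrium (0, 0).
print([np.round(b, 2) for b in beliefs])
```

The convergence guarantees discussed in the lecture replace this static matrix game with a stochastic game, where the analysis requires the differential-inclusion and Lyapunov-function machinery listed in the outline.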

Independent Learning Dynamics for Stochastic Games - Where Game Theory Meets

International Mathematical Union