1. Intro
2. Reinforcement Learning (RL) Applications
3. Value-function Approximation
4. Comparison between SL and RL
5. Markov Decision Process (MDP)
6. Batch learning in MDPs
7. Example: Video game playing
8. Batch learning in large MDPs
9. Assumption on data (?)
10. Assumption on data & MDP dynamics
11. Algorithm for batch RL
12. How things go wrong (w/ restricted class)
13. Fix using a strong assumption ("completeness")
14. Realizability alone is insufficient?
15. Proving the conjecture: Attempt 1
16. Checklist for a plausible construction
17. Importance of the conjecture
18. Importance of the construction
Description:
Explore the complexities of Reinforcement Learning with value-function approximation in this 54-minute lecture by Nan Jiang from the University of Illinois Urbana-Champaign. Delve into the applications of RL, compare it with Supervised Learning, and understand the intricacies of Markov Decision Processes. Examine batch learning in MDPs, including examples from video game playing, and analyze the assumptions on data and MDP dynamics. Investigate algorithms for batch RL and learn how things can go wrong with restricted classes. Discover the importance of strong assumptions like "completeness" and why realizability alone may be insufficient. Follow attempts to prove a key conjecture and grasp its significance in the field. This talk, part of the "Emerging Challenges in Deep Learning" series at the Simons Institute, offers valuable insights into the hardness of RL with value-function approximation.
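For reference, the two assumptions named in the description have standard formal statements; the following is a minimal sketch of the usual textbook definitions, not taken verbatim from the lecture. With a value-function class \(\mathcal{F}\) and the Bellman optimality operator \(\mathcal{T}\), realizability asks only that the optimal action-value function lie in \(\mathcal{F}\), while completeness asks that \(\mathcal{F}\) be closed under \(\mathcal{T}\):
\[
(\mathcal{T}f)(s,a) \;=\; R(s,a) \;+\; \gamma\,\mathbb{E}_{s' \sim P(\cdot\mid s,a)}\Big[\max_{a'} f(s',a')\Big],
\qquad
\text{realizability: } Q^\star \in \mathcal{F},
\qquad
\text{completeness: } \mathcal{T}f \in \mathcal{F}\ \ \forall f \in \mathcal{F}.
\]
Completeness is the stronger requirement, which is why the question of whether realizability alone suffices is the conjecture the lecture focuses on.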

On the Hardness of Reinforcement Learning With Value-Function Approximation

Simons Institute