1. Intro
2. Let's Talk About Reinforcement Learning
3. The Three Types of Machine Learning
4. Embodied Learning
5. Agent-Based Learning
6. The Decision Policy
7. The Reward
8. Two Ideas
9. Dealing with Uncertainty
10. Requirements of Big Successes
11. Simulation
12. Fully Observable
13. Transferability of Method
14. What Is the Cost of an Error?
15. Can We Apply This to Real Problems?
16. Real-World Alternatives
17. What Are We Trying to Solve?
18. Tools
19. Microsoft Azure
20. AWS SageMaker
21. When Should I Use Contextual Bandits?
22. Limitations
23. Behavioral Cloning
24. Expert Systems / Supervised Learning
25. Collect Trajectories from an Expert
26. Break Up into State / Action Pairs
27. Train a Model on the Trajectories
28. Interactive Experts
29. Applications
30. When Should I Use Imitation Learning?
31. Scalability Concerns
32. Capturing Datasets
33. Imitation Learning + Reinforcement Learning
34. Resources
35. Offline RL
36. Why Is This Exciting?
Description:
Explore alternatives to Reinforcement Learning for real-world problems in this 42-minute conference talk from Open Data Science. Delve into the limitations of Reinforcement Learning in practical applications, focusing on the challenges of simulation and full observability. Discover two related approaches to agent-based learning: Contextual Bandits and Imitation Learning. Learn how these methods simplify the full Reinforcement Learning problem, along with their formal definitions, differences, limitations, and real-world applications. Gain insights into tools like Microsoft Azure and AWS SageMaker, and understand when to use each approach. Examine concepts such as behavioral cloning, expert systems, and interactive experts. Consider scalability concerns, methods for capturing datasets, and the exciting potential of combining Imitation Learning with Reinforcement Learning. Conclude with a discussion of Offline RL and its significance in addressing real-world challenges.
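
The behavioral cloning recipe listed in the outline (collect trajectories from an expert, break them up into state/action pairs, train a model on the trajectories) could be sketched roughly as below. The Gymnasium-style environment, the expert_policy callable, and the scikit-learn classifier are illustrative assumptions, not details taken from the talk.

import numpy as np
from sklearn.neural_network import MLPClassifier

def collect_trajectories(expert_policy, env, n_episodes=50):
    # Step 1: roll out the expert and record the visited states and chosen actions.
    trajectories = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        episode, done = [], False
        while not done:
            action = expert_policy(state)
            next_state, _, terminated, truncated, _ = env.step(action)
            episode.append((state, action))
            state = next_state
            done = terminated or truncated
        trajectories.append(episode)
    return trajectories

def to_state_action_pairs(trajectories):
    # Step 2: flatten the episodes into a supervised dataset of (state, action) pairs.
    states = np.array([s for episode in trajectories for s, _ in episode])
    actions = np.array([a for episode in trajectories for _, a in episode])
    return states, actions

def train_clone(states, actions):
    # Step 3: fit a classifier that predicts the expert's action from the state.
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(states, actions)
    return model

At deployment the trained model stands in for the expert, e.g. action = model.predict(state.reshape(1, -1))[0].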

Alternatives to Reinforcement Learning for Real-World Problems

Open Data Science