1. Intro
2. MDP definition
3. Grid World
4. State space
5. Action space
6. Transition function
7. Reward function
8. Discount factor
9. QuickPOMDPs
10. MDP solvers
11. RL solvers
12. Pluto notebook
13. Grid World environment
14. Grid World actions
15. Grid World transitions
16. Grid World rewards
17. Grid World discount
18. Grid World termination
19. Grid World MDP
20. Solutions (offline)
21. Value iteration
22. Transition probability distribution
23. Using the policy
24. Visualizations
25. Reinforcement learning
26. TD learning
27. Q-learning
28. SARSA
29. Solutions (online)
30. MCTS
31. MCTS visualization
32. Simulations
33. Extras
34. References
Description:
Explore Markov Decision Processes (MDPs) and decision-making under uncertainty in this comprehensive 49-minute video tutorial. Dive into the fundamentals of MDPs, including the state space, action space, transition function, reward function, and discount factor. Learn about QuickPOMDPs and various MDP solvers, including reinforcement learning approaches. Follow along with a Pluto notebook to implement a Grid World environment, defining its actions, transitions, rewards, and termination conditions. Discover offline solution methods like value iteration, reinforcement learning methods such as Q-learning and SARSA, and online planning with Monte Carlo Tree Search (MCTS), along with policy visualization. Gain practical insights through simulations and visualizations, and access additional resources and references to further your understanding of decision-making under uncertainty using POMDPs.jl in the Julia programming language.
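To make the notebook's workflow concrete, here is a minimal sketch, in Julia, of how a small Grid World MDP might be assembled with QuickPOMDPs.jl and then solved offline with value iteration via DiscreteValueIteration.jl. This is not the tutorial's exact code: the grid size, reward cells, slip probability, and the names GWState, GWAction, and move are illustrative assumptions.

```julia
# Hedged sketch: define a small Grid World MDP with QuickPOMDPs.jl and solve it
# offline with value iteration. Grid size, reward placement, and the 0.8/0.2
# slip probability are illustrative choices, not the tutorial's exact values.
using POMDPs
using QuickPOMDPs: QuickMDP
using POMDPTools: Deterministic, SparseCat
using DiscreteValueIteration: ValueIterationSolver

struct GWState
    x::Int
    y::Int
end

@enum GWAction UP DOWN LEFT RIGHT

const SIZE = 5
const GOAL = GWState(5, 5)      # +10 reward cell (illustrative)
const PIT  = GWState(3, 3)      # -10 reward cell (illustrative)
const DONE = GWState(-1, -1)    # absorbing terminal state

𝒮 = push!([GWState(x, y) for x in 1:SIZE for y in 1:SIZE], DONE)
𝒜 = [UP, DOWN, LEFT, RIGHT]

# One step in the chosen direction, clamped to the grid boundary.
function move(s::GWState, a::GWAction)
    dx, dy = a == UP ? (0, 1) : a == DOWN ? (0, -1) : a == LEFT ? (-1, 0) : (1, 0)
    return GWState(clamp(s.x + dx, 1, SIZE), clamp(s.y + dy, 1, SIZE))
end

# Transition distribution: reward cells fall into the absorbing state; elsewhere
# the intended move succeeds with probability 0.8, otherwise the agent stays put.
function T(s::GWState, a::GWAction)
    (s == GOAL || s == PIT || s == DONE) && return Deterministic(DONE)
    sp = move(s, a)
    sp == s && return Deterministic(s)       # bumped into a wall
    return SparseCat([sp, s], [0.8, 0.2])
end

# Reward is collected when acting in a reward cell, just before absorption.
R(s::GWState, a::GWAction) = s == GOAL ? 10.0 : s == PIT ? -10.0 : 0.0

mdp = QuickMDP(
    states       = 𝒮,
    actions      = 𝒜,
    transition   = T,
    reward       = R,
    discount     = 0.95,
    initialstate = Deterministic(GWState(1, 1)),
    isterminal   = s -> s == DONE,
)

policy = solve(ValueIterationSolver(max_iterations=100), mdp)
action(policy, GWState(1, 1))   # look up the computed action for a state
```

The resulting policy can be queried with action(policy, s) inside a simulation loop, which is the same pattern the video builds on for the visualization and simulation sections.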

MDPs - Markov Decision Processes - Decision Making Under Uncertainty Using POMDPs.jl

The Julia Programming Language