MDPs - Markov Decision Processes - Decision Making Under Uncertainty Using POMDPs.jl

Explore Markov Decision Processes (MDPs) and decision-making under uncertainty in this comprehensive 49-minute video tutorial. Dive into the fundamentals of MDPs, including the state space, action space, transition function, reward function, and discount factor. Learn about the QuickPOMDPs.jl interface and various MDP solvers, including reinforcement learning approaches. Follow along with a Pluto notebook to implement a Grid World environment, defining its actions, transitions, rewards, and termination conditions; a minimal sketch of that workflow follows below. Discover offline solution methods such as value iteration and how to visualize the resulting policy, as well as online and learning-based approaches such as Q-learning, SARSA, and Monte Carlo Tree Search (MCTS). Gain practical insights through simulations and visualizations, and access additional resources and references to further your understanding of decision-making under uncertainty using POMDPs.jl in the Julia programming language.
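
The core workflow described above (define a Grid World MDP with QuickPOMDPs.jl, solve it offline with value iteration, then evaluate the policy in simulation) can be condensed into a short script. The sketch below is a minimal, illustrative version of that workflow, not the notebook's exact code: the grid size, the reward cells (`GOOD`, `BAD`), the absorbing `DONE` state, and the solver settings are assumptions chosen to keep the example self-contained.

```julia
using POMDPs
using QuickPOMDPs: QuickMDP
using POMDPTools: Deterministic, RolloutSimulator
using DiscreteValueIteration: ValueIterationSolver

# Illustrative grid size and reward cells; the notebook's exact values may differ.
const SIZE = 10
const GOOD = (9, 3)   # +10 reward cell (assumed location)
const BAD  = (8, 8)   # -10 reward cell (assumed location)
const DONE = (-1, -1) # absorbing "null" state used to end an episode

grid_mdp = QuickMDP(
    states   = push!(vec([(x, y) for x in 1:SIZE, y in 1:SIZE]), DONE),
    actions  = [:up, :down, :left, :right],
    discount = 0.95,

    # Reward depends only on the current cell.
    reward = (s, a) -> s == GOOD ? 10.0 : s == BAD ? -10.0 : 0.0,

    # Deterministic dynamics: a reward cell jumps to the absorbing state;
    # otherwise move one cell, clamped to the grid boundary.
    transition = function (s, a)
        (s == GOOD || s == BAD) && return Deterministic(DONE)
        x, y = s
        a == :up    && (y += 1)
        a == :down  && (y -= 1)
        a == :left  && (x -= 1)
        a == :right && (x += 1)
        return Deterministic((clamp(x, 1, SIZE), clamp(y, 1, SIZE)))
    end,

    initialstate = Deterministic((1, 1)),
    isterminal   = s -> s == DONE,
)

# Offline solution: value iteration produces a policy over the discrete state space.
vi_policy = solve(ValueIterationSolver(max_iterations=100), grid_mdp)
@show action(vi_policy, (1, 1))

# Evaluate the policy by simulation: estimated discounted return from the initial state.
@show simulate(RolloutSimulator(max_steps=100), grid_mdp, vi_policy)
```

Because POMDPs.jl solvers share the same `solve`/`action` interface, an online planner such as `MCTSSolver` from MCTS.jl, or tabular Q-learning and SARSA solvers, can be swapped in for `ValueIterationSolver` with little change to the rest of the script.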