1. Intro
2. History of reinforcement learning
3. Environment and agent interaction loop
4. Gymnasium and Stable Baselines3
5. Hands-on: how to set up a Gymnasium environment
6. Markov decision process
7. Bellman equation for the state-value function
8. Bellman equation for the action-value function
9. Bellman optimality equations
10. Exploration vs. exploitation
11. Recommended textbook
12. Model-based vs. model-free algorithms
13. On-policy vs. off-policy algorithms
14. Discrete vs. continuous action space
15. Discrete vs. continuous observation space
16. Overview of modern reinforcement learning algorithms
17. Q-learning
18. Deep Q-network (DQN)
19. Hands-on: how to train a DQN agent
20. Usefulness of reinforcement learning
21. Challenge: inverted pendulum
22. Conclusion
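The Bellman-update and Q-learning chapters in the outline above can be illustrated in plain Python. The 5-state chain MDP below is a toy problem of our own, not an example from the video; all constants are illustrative:

```python
import random

# Tabular Q-learning on a tiny 5-state chain MDP (illustrative toy, not from
# the video): actions move left/right, reward 1 only on reaching state 4.
N_STATES, ACTIONS = 5, (0, 1)          # 0 = left, 1 = right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1  # discount, learning rate, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic chain dynamics; state 4 is terminal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                   # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (exploration vs. exploitation)
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: a sampled form of the Bellman optimality equation
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: move right (1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES)]
```

The single update line is the whole algorithm: it nudges `Q[s][a]` toward the Bellman optimality target `r + γ·max_a' Q(s', a')`, which is exactly the relationship the Bellman chapters derive.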
Description:
Dive into the world of Reinforcement Learning (RL) with this comprehensive video tutorial. Explore the fundamental theory behind RL and learn how to implement it using the Farama Foundation's Gymnasium and Stable Baselines3 in Python. Follow along as the instructor demonstrates training an AI agent to solve the classic CartPole control problem. Gain insights into the RL process, including environment-agent interactions, Markov decision processes, and Bellman equations. Discover the differences between model-based and model-free algorithms, on-policy and off-policy approaches, and discrete vs. continuous action and observation spaces. Get hands-on experience setting up a Gymnasium environment and training a Deep Q-Network (DQN) agent. Conclude with a challenge to apply your newfound knowledge to the inverted pendulum problem, and explore additional resources for further learning in this exciting field of machine learning.

Introduction to Reinforcement Learning

Digi-Key