1. Introduction
2. What are hyperparameters
3. Hyperparameter optimization loop
4. Grid search
5. Random search
6. Bayesian optimization
7. Install Python packages
8. Import Python packages
9. Configure Weights & Biases
10. Set deterministic mode
11. Load pendulum gymnasium environment
12. Test pendulum environment
13. Test random actions with dummy agent
14. Testing and logging callbacks
15. Define trial to train and test an agent
16. Define project settings and hyperparameter ranges
17. Create gymnasium environment
18. Define Ax experiment to perform Bayesian optimization for hyperparameters
19. Perform hyperparameter optimization and debugging
20. Train agent with best hyperparameters
21. Test agent
22. Run additional trials
23. Weights & Biases sweeps
Description:
Explore hyperparameter optimization for reinforcement learning using Meta's Ax framework in this comprehensive 58-minute tutorial. Learn about the three basic HPO techniques: grid search, random search, and Bayesian optimization. Dive into the practical implementation in Python, including package installation, environment setup, and configuration of Weights & Biases. Follow along as the instructor demonstrates loading and testing a pendulum gymnasium environment, defining a trial function to train and test an agent, and setting up an Ax experiment for Bayesian optimization. Gain insights into debugging, training an agent with the optimized hyperparameters, and running additional trials. Conclude with an introduction to Weights & Biases sweeps as an alternative way to run hyperparameter searches.
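
The workflow described above boils down to a single loop: define a trial function that trains and evaluates an agent, let Ax's Bayesian optimization propose hyperparameters, then retrain with the best set it finds. The tutorial's own code is not reproduced on this page, so the following is only a minimal sketch of that loop under stated assumptions: it uses Ax's Service API (AxClient, whose exact signatures vary between Ax releases), stable-baselines3's SAC as a stand-in agent on the Pendulum-v1 gymnasium environment, and Weights & Biases for per-trial logging. The parameter names, ranges, and training budget are placeholders, not the values used in the video.

```python
# Hypothetical sketch: Bayesian optimization of agent hyperparameters on Pendulum-v1
# using Ax's Service API, with per-trial logging to Weights & Biases.
# This is not the tutorial's code; names, ranges, and budgets are illustrative.
import gymnasium as gym
import wandb
from ax.service.ax_client import AxClient, ObjectiveProperties
from stable_baselines3 import SAC
from stable_baselines3.common.evaluation import evaluate_policy


def run_trial(params: dict) -> float:
    """Train an agent with the given hyperparameters and return its mean reward."""
    run = wandb.init(project="pendulum-hpo", config=params, reinit=True)
    env = gym.make("Pendulum-v1")
    model = SAC(
        "MlpPolicy",
        env,
        learning_rate=params["learning_rate"],
        gamma=params["gamma"],
        verbose=0,
    )
    model.learn(total_timesteps=20_000)  # short budget, for illustration only
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=10)
    wandb.log({"mean_reward": mean_reward})
    run.finish()
    return float(mean_reward)


ax_client = AxClient()
ax_client.create_experiment(
    name="pendulum_sac_hpo",
    parameters=[
        {"name": "learning_rate", "type": "range", "bounds": [1e-5, 1e-2], "log_scale": True},
        {"name": "gamma", "type": "range", "bounds": [0.90, 0.999]},
    ],
    objectives={"mean_reward": ObjectiveProperties(minimize=False)},
)

# Bayesian optimization loop: Ax proposes a candidate, we score it, Ax updates its surrogate model.
for _ in range(20):
    params, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=run_trial(params))

best_params, _ = ax_client.get_best_parameters()
print("Best hyperparameters found:", best_params)
```

After the search, the natural next step (matching chapters 20 and 21 of the outline) would be to retrain an agent with best_params for a longer budget and test it in the environment.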

Hyperparameter Optimization for Reinforcement Learning Using Meta's Ax

Digi-Key