Chapters:
1. Introduction
2. The Future of Machine Learning
3. Bandits
4. Drug Makers
5. Google Maps
6. Content Recommendation
7. Stochastic Model
8. Thompson
9. Regret Minimization
10. Regret
11. Sublinear Regret
12. Sub-Gaussian
13. Central Limit Theorem
Description:
Explore the fundamentals of bandit algorithms in this comprehensive lecture from the University of Washington. Delve into the future of machine learning and discover how bandit algorithms are applied in various real-world scenarios, including drug development, Google Maps optimization, and content recommendation systems. Learn about stochastic models, Thompson sampling, and regret minimization techniques. Gain insights into key concepts such as sublinear regret, sub-Gaussian distributions, and the Central Limit Theorem. Enhance your understanding of this crucial area of machine learning and its practical applications in decision-making processes.
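To make the Thompson sampling idea mentioned above concrete, here is a minimal sketch for a Bernoulli bandit with Beta(1, 1) priors. This is a standard illustration, not the lecture's own code; the arm means, horizon, and function name are illustrative assumptions.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors on each arm.

    `true_means` and `horizon` are illustrative inputs, not from the lecture.
    Returns the total reward collected over `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alphas = [1] * k  # Beta posterior successes + 1
    betas = [1] * k   # Beta posterior failures + 1
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample from each arm's posterior and play the argmax.
        samples = [rng.betavariate(alphas[i], betas[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        # Bayesian update: increment the played arm's success/failure count.
        if reward:
            alphas[arm] += 1
        else:
            betas[arm] += 1
    return total_reward
```

Because the posterior of a clearly inferior arm concentrates near its low mean, it is sampled less and less often, which is how Thompson sampling achieves the sublinear regret discussed in the lecture.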

Bandits - Kevin Jamieson - University of Washington

Paul G. Allen School