Chapters:
1. Introduction
2. Machine Learning
3. Outline
4. Hyperparameter Tuning: Global vs Local
5. Hyperparameter Tuning Methods
6. Baseline Challenges
7. Success of Having
8. Issues
9. Resource Limitations
10. Local vs Global Validation
11. Baselines vs Bayesian Optimization
12. Neural Architecture Search
13. Architecture Search
14. Weight Sharing
15. Constraints
16. Federated Learning
17. Federated Averaging
18. Local Hyperparameters
19. Local Hyperparameter Tuning
20. Summary
21. Solution
22. Methods
23. Results
24. Key Takeaways
25. Questions
Description:
Explore the challenges, baselines, and connections to weight-sharing in federated hyperparameter tuning through this comprehensive conference talk. Delve into the complexities of tuning hyperparameters in federated learning environments, where models are trained across distributed networks of heterogeneous devices. Learn about key challenges in federated hyperparameter optimization and discover how standard approaches can be adapted to form baselines. Gain insights into a novel method called FedEx, which accelerates federated hyperparameter tuning by connecting to neural architecture search techniques. Examine theoretical foundations and empirical results demonstrating FedEx's superior performance on benchmarks like Shakespeare, FEMNIST, and CIFAR-10. Understand the importance of efficient hyperparameter tuning in federated learning and its impact on model accuracy and training budgets.
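The federated averaging (FedAvg) procedure that this talk's hyperparameter-tuning methods build on can be sketched as follows. This is a minimal illustrative sketch, not the speaker's implementation: the toy linear-regression objective, the function names, and all parameter values are assumptions chosen for clarity.

```python
# Minimal FedAvg sketch: clients run local SGD on private data, then the
# server averages their weights (weighted by local dataset size).
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=1):
    """One client's local update: a few passes of gradient descent on its
    private data (squared-error loss on a linear model as a stand-in)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(clients, w, rounds=10, lr=0.1):
    """Each round: broadcast w, collect every client's local update,
    then replace w with the data-size-weighted average."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_sgd(w, X, y, lr))
            sizes.append(len(y))
        w = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return w
```

In this sketch the learning rate `lr` and local `epochs` are exactly the kind of hyperparameters the talk is concerned with: they must be tuned per client or globally, under the communication and privacy constraints of the federated setting.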

Federated Hyperparameter Tuning - Challenges, Baselines, and Connections

Stanford University