Outline:
1. Introduction
2. Stochastic Gradient Descent (SGD)
3. Application: Next Word Prediction
4. Federated Learning
5. Local Objective Functions
6. Basic Algorithm
7. Sources of Heterogeneity
8. Why Is This a Problem?
9. Quantifying Heterogeneity
10. Open Question 1
11. Open Question 2
12. Communication Heterogeneity
13. Client Selection
14. Example
15. Power-of-Choice Selection
16. Summary
17. Questions
18. Local Adaptive Optimization
19. Key Takeaway
20. Other Interesting Directions
Description:
Learn about federated learning and distributed optimization in this technical lecture from Carnegie Mellon University Associate Professor Gauri Joshi. Explore the challenges and solutions for implementing machine learning at the edge, where data collection and model training occur on resource-constrained mobile devices. Dive into the complexities of heterogeneity in federated learning systems, examining how variations in data, communication, and computation across edge clients impact system performance. Discover recent algorithmic developments designed to address these heterogeneity challenges, including approaches to client selection and local adaptive optimization. Follow along as Prof. Joshi, an MIT Technology Review 35 Innovators Under 35 recipient and NSF CAREER Award winner, breaks down key concepts from stochastic gradient descent to next-word prediction applications, while addressing critical questions about scalability and flexibility in federated optimization systems.
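The federated setting the lecture covers (each client runs local SGD on its own data, and a server periodically averages the local models) can be sketched as below. This is a minimal illustration only: the synthetic clients, their differing local slopes (standing in for data heterogeneity), and all hyperparameters are assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous clients: each fits y = w * x, but with a
# different true slope, mimicking non-identical data across edge devices.
true_slopes = [1.0, 2.0, 3.0]
clients = []
for s in true_slopes:
    x = rng.normal(size=50)
    clients.append((x, s * x))

def local_sgd(w, x, y, steps=10, lr=0.05):
    """Run a few SGD steps on one client's squared-error objective."""
    for _ in range(steps):
        i = rng.integers(len(x))
        grad = 2 * (w * x[i] - y[i]) * x[i]  # d/dw of (w*x_i - y_i)^2
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(50):
    # Each client starts its local run from the current global model.
    local_models = [local_sgd(w_global, x, y) for x, y in clients]
    # Server step: simple unweighted averaging of the local models.
    w_global = float(np.mean(local_models))

print(w_global)
```

With balanced clients the averaged model settles near the mean of the local optima (slope 2.0 here), while the per-round spread of the local models illustrates the "client drift" that heterogeneity-aware algorithms try to control.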

Heterogeneity-Aware Algorithms for Federated Learning and Distributed Optimization

Centre for Networked Intelligence, IISc