Chapters:
1. Introduction
2. Finetuning
3. Changing the Loss Function
4. Representation Learning
5. Multitask Linear Regression
6. Results
7. WT
8. Diversity
9. Federated Learning
10. Federated Learning Performance
11. Federated Learning vs Distributed SGD
12. Multitasking Linear Regression
13. Model Drift
14. Distributed Gradient Descent
15. The Goal of a Model
16. Fine Tuning
17. Regularization
Description:
Learn about machine learning model adaptability and representation learning in this technical lecture from Prof. Sanjay Shakkottai of The University of Texas at Austin. Explore how models can be trained effectively on data from multiple clients/environments for deployment in new, unseen environments. Dive into two key approaches: Model-Agnostic Meta-Learning (MAML) and federated learning with FedAvg, examining their theoretical foundations in the multi-task linear representation setting. Understand how the bi-level update structure in both approaches leverages the diversity of client data to achieve optimal representation learning. Follow along as the lecture demonstrates exponential convergence to the ground-truth representation and discusses practical applications in wireless communication networks and online platforms. Gain insights into fine-tuning strategies, regularization techniques, model drift considerations, and the fundamental goals of adaptive learning models.
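The bi-level structure mentioned in the description (a fast inner update of task-specific heads, a slow outer update of the shared representation) can be sketched in the multi-task linear regression setting. The following is an illustrative numpy sketch, not code from the lecture; all dimensions, learning rates, and function names are assumptions:

```python
import numpy as np

# Synthetic multi-task linear regression: each task t has labels
# y_t = X_t @ B_star @ w_t, with a shared d x k representation B_star
# and task-specific heads w_t. Sizes and step size are illustrative.
rng = np.random.default_rng(0)
d, k, n_tasks, n = 20, 3, 10, 50

B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]  # ground-truth subspace
tasks = []
for _ in range(n_tasks):
    w = rng.normal(size=k)
    X = rng.normal(size=(n, d))
    tasks.append((X, X @ B_star @ w))

def subspace_dist(B):
    # Distance between span(B) and span(B_star): spectral norm of the
    # part of B_star lying outside span(B).
    return np.linalg.norm((np.eye(d) - B @ B.T) @ B_star, ord=2)

B = np.linalg.qr(rng.normal(size=(d, k)))[0]  # learned representation
lr = 0.1
dist_history = [subspace_dist(B)]
for _ in range(200):
    grad = np.zeros_like(B)
    for X, y in tasks:
        Z = X @ B
        # Inner (client-level) step: best head for the current representation
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        # Outer (representation-level) step: gradient of the squared loss
        # with respect to B, holding the head fixed
        r = Z @ w - y
        grad += (X.T @ r)[:, None] @ w[None, :] / n
    B = np.linalg.qr(B - lr * grad / n_tasks)[0]  # descend, re-orthonormalize
    dist_history.append(subspace_dist(B))
```

The shrinking subspace distance mirrors the lecture's claim that, given sufficiently diverse client tasks, this alternating inner/outer scheme drives the learned representation toward the ground-truth one.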

The Power of Adaptivity in Representation Learning - From Meta-Learning to Federated Learning

Centre for Networked Intelligence, IISc