1. Intro
2. (Non)-convex learning
3. Differential Privacy
4. Cross-device federated learning
5. Differentially private stochastic gradient descent (DP-SGD)
6. DP-SGD: Key insights
7. DP-Federated Averaging (DP-FedAvg)
8. Challenges for Amplification by Sampling in FL
9. Deconstructing the SGD model update
10. Noise Accumulation in Prefix Sums
11. Towards Tree Aggregation
12. Interlude: Follow-the-regularized-leader (FTRL)
13. DP-Follow-the-regularized-leader (DP-FTRL)
14. DP-FTRL: Online learning properties
15. Privacy-Utility Trade-offs for Stackoverflow
16. Production model with formal DP
17. Matrix factorization view of prefix sum estimation
18. Matrix factorization view of DP prefix sum
19. Future directions
20. Acknowledgements
Description:
Explore federated learning with formal user-level differential privacy guarantees in this 59-minute invited talk from PPML 2022. Delve into topics such as non-convex learning, cross-device federated learning, differentially private stochastic gradient descent, and DP-Federated Averaging. Examine challenges in amplification by sampling, noise accumulation in prefix sums, and tree aggregation. Investigate DP-Follow-the-regularized leader (DP-FTRL) and its online learning properties. Analyze privacy-utility trade-offs using Stackoverflow as an example, and discover a production model with formal differential privacy. Gain insights into matrix factorization views of prefix sum estimation and DP prefix sum. Conclude with future directions and acknowledgements in this comprehensive exploration of privacy-preserving machine learning techniques.
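To make the prefix-sum discussion concrete: the naive approach adds fresh noise at every step, so the noise in the t-th prefix sum accumulates over t terms, whereas tree aggregation assembles each prefix from at most log2(n)+1 noisy dyadic intervals. Below is a minimal illustrative sketch of that idea, not code from the talk; `sigma` is a placeholder noise scale, and a real DP mechanism (as in DP-FTRL) would calibrate it to the sensitivity and privacy budget.

```python
import random

def tree_prefix_sums(xs, sigma=1.0):
    """Estimate all prefix sums of xs via tree aggregation.

    Each noisy partial sum covers a dyadic interval [start, start+length),
    so every prefix is assembled from at most log2(n)+1 noisy nodes,
    instead of accumulating n independent noise terms as in the
    naive per-step approach. Illustrative only: sigma is a placeholder,
    not a calibrated DP noise scale.
    """
    n = len(xs)
    noisy = {}  # (start, length) -> noisy sum over xs[start:start+length]

    # Build one noisy sum per dyadic interval (lengths 1, 2, 4, ...).
    length = 1
    while length <= n:
        for start in range(0, n - length + 1, length):
            true_sum = sum(xs[start:start + length])
            noisy[(start, length)] = true_sum + random.gauss(0.0, sigma)
        length *= 2

    # Assemble each prefix from its binary decomposition into
    # aligned dyadic intervals, largest interval first.
    prefixes = []
    for t in range(1, n + 1):
        total, start, rem = 0.0, 0, t
        while rem > 0:
            length = 1 << (rem.bit_length() - 1)  # largest power of 2 <= rem
            total += noisy[(start, length)]
            start += length
            rem -= length
        prefixes.append(total)
    return prefixes
```

With `sigma=0.0` the function reduces to exact prefix sums, which makes the interval bookkeeping easy to check; with positive `sigma`, each estimate carries only O(log n) noise terms.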

Federated Learning with Formal User-Level Differential Privacy Guarantees

TheIACR