Chapters:
1. Intro
2. Privacy and Learning
3. Privacy Preserving Learning
4. Stochastic Optimization
5. Private Stochastic Convex Optimization
6. Typical Strategy 1: DP-SGD
7. Typical Strategy 2: Bespoke Analysis
8. Two techniques summary
9. High-level result
10. Outline of Strategy
11. Key Ingredient 1: Online (Linear) Optimization/Learning
12. Online Linear Optimization
13. Key Ingredient 2: Online to Batch Conversion
14. Straw man algorithm: Gaussian mechanism + online-to-batch
15. Key Ingredient 2: Anytime Online-to-Batch Conversion
16. Important Property of Anytime Online-to-Batch
17. Anytime vs Classic Sensitivity
18. Gradient as sum of gradient differences
19. Our actual strategy
20. Final Ingredient: Tree Aggregation
21. Final Algorithm
22. Loose Ends
23. Unpacking the bound
24. Applications: Adaptivity
25. Applications: Parameter-free/Comparator Adaptive
26. Fine Print, Open problems
Description:
Explore a Google TechTalk presented by Ashok Cutkosky on differentially private online to batch conversion in stochastic optimization. Delve into the challenges of privacy-preserving algorithms and learn about a novel approach that bridges the gap between simple but suboptimal methods and complex but theoretically optimal techniques. Discover how this new variation on the classical online-to-batch conversion can transform any online optimization algorithm into a private stochastic optimization algorithm, potentially achieving optimal convergence rates. Throughout the 50-minute talk, examine key concepts such as DP-SGD, bespoke analysis, online linear optimization, and tree aggregation. Gain insights into the applications of this method, including adaptivity and parameter-free comparator adaptation, while considering the implications for privacy in machine learning and optimization practices.
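To make the named ingredients concrete, here is a rough, illustrative sketch of the "Gaussian mechanism + online-to-batch" straw-man baseline from the outline, not the speaker's final algorithm: clipped, Gaussian-noised stochastic gradients are fed to online gradient descent (an online linear optimization algorithm), and the classical online-to-batch conversion outputs the average iterate. The learning rate, clipping norm, and noise multiplier below are assumed placeholders, and the noise calibration to a specific (epsilon, delta) budget is omitted.

```python
import numpy as np

def private_online_to_batch(grad_oracle, dim, n_steps, lr=0.1,
                            clip_norm=1.0, noise_mult=1.0, seed=0):
    """Illustrative baseline: noisy clipped gradients -> online gradient
    descent -> average iterate (classical online-to-batch conversion)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)      # online learner's current iterate
    avg = np.zeros(dim)    # running average of iterates (the batch output)
    for t in range(1, n_steps + 1):
        g = grad_oracle(w)                                           # stochastic gradient at w
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))    # clip to bound sensitivity
        g = g + rng.normal(0.0, noise_mult * clip_norm, size=dim)    # Gaussian mechanism
        w = w - lr * g                                               # online gradient descent step
        avg += (w - avg) / t                                         # incremental average of iterates
    return avg

# Toy usage: noisy gradients of a quadratic centered at the all-ones vector.
if __name__ == "__main__":
    target = np.ones(5)
    oracle = lambda w: (w - target) + np.random.default_rng().normal(0, 0.1, 5)
    print(private_online_to_batch(oracle, dim=5, n_steps=500))
```

The talk's contribution, as the description indicates, is a variation on this conversion (an "anytime" online-to-batch step combined with tree aggregation) that recovers optimal convergence rates rather than the suboptimal ones this simple baseline achieves.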

Differentially Private Online-to-Batch Conversion for Stochastic Optimization

Google TechTalks