USENIX Security '24 - Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Description:
Watch a research presentation from USENIX Security '24 exploring how differentially private stochastic gradient descent (DP-SGD) provides stronger privacy guarantees than previously thought for many datapoints in common benchmark datasets. Learn about a novel per-instance privacy analysis that demonstrates how points with similar neighbors in datasets enjoy better data-dependent privacy protection compared to outliers. Discover how researchers developed a new composition theorem to analyze entire training runs, formally proving that DP-SGD leaks significantly less information than indicated by current data-independent guarantees when training on standard benchmarks. Understand the implications for privacy attacks and how they may fail against many datapoints without sufficient adversarial control over training datasets.
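For context on the algorithm the talk analyzes: a standard DP-SGD update clips each per-example gradient to bound its sensitivity, then adds Gaussian noise calibrated to that clipping norm. The sketch below is a generic NumPy illustration of one such step, not the paper's per-instance analysis; the function name and parameters are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update: per-example clipping, Gaussian noise, averaged step."""
    # Clip each per-example gradient so its L2 norm is at most clip_norm.
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    # Sum the clipped gradients and add noise scaled to the clipping norm.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape
    )
    # Average over the batch and take an SGD step.
    return params - lr * noisy_sum / len(per_example_grads)

# Example usage with a 3-dimensional parameter vector.
rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.0, 0.0, 10.0])]
new_params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                         lr=0.1, params=params, rng=rng)
```

The clipping norm caps how much any single example can shift the update; that worst-case bound is the "sensitivity" the talk argues is often loose, since gradients of points with many similar neighbors look alike.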

USENIX