Explore the challenges and limitations of differentially private machine learning in this 15-minute IEEE presentation. Delve into the concept of adversary instantiation and its implications for establishing lower bounds on the privacy of privacy-preserving ML algorithms. Learn about the non-private nature of traditional machine learning, the integration of differential privacy into training, and the importance of empirically estimating the privacy parameter epsilon (ε). Focus on Differentially Private Stochastic Gradient Descent (DP-SGD) and examine key topics such as membership inference, worst-case scenarios, intermediate model access, and adaptive distinguishers. Gain insights into gradient poisoning attacks and their impact on privacy guarantees in machine learning systems.
Adversary Instantiation - Lower Bounds for Differentially Private Machine Learning
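For readers unfamiliar with DP-SGD, the core idea is to clip each example's gradient to a fixed L2 norm and add Gaussian noise before the parameter update. The following is a minimal NumPy sketch for logistic regression; all hyperparameter values (`lr`, `clip_norm`, `noise_mult`, `batch_size`) are illustrative assumptions, not settings from the presentation.

```python
import numpy as np

def dp_sgd(X, y, steps=50, lr=0.5, clip_norm=1.0,
           noise_mult=1.1, batch_size=16, seed=0):
    """Sketch of DP-SGD for logistic regression (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        clipped = []
        for i in idx:
            # Per-example gradient of the logistic loss.
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            g = (p - y[i]) * X[i]
            # Clip each example's gradient to L2 norm <= clip_norm,
            # bounding any single example's influence on the update.
            g = g / max(1.0, np.linalg.norm(g) / clip_norm)
            clipped.append(g)
        g_sum = np.sum(clipped, axis=0)
        # Add Gaussian noise scaled to the clipping norm (the sensitivity).
        g_sum = g_sum + rng.normal(0.0, noise_mult * clip_norm, size=d)
        w = w - lr * g_sum / len(idx)
    return w
```

The clipping bound is exactly what a gradient-poisoning adversary targets: by crafting a worst-case example whose clipped gradient is maximally distinguishable, an attacker can empirically lower-bound the true ε of the trained model.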