Application: Model-Inversion Attacks (inferring training data from trained models; Fredrikson et al., 2015)
Extracting a Decision Tree
Countermeasures
Brief Announcement
Conclusion
Generic Model Retraining Attacks
Description:
Explore a 28-minute conference talk from USENIX Security '16 that delves into the vulnerabilities of machine learning models deployed with public query interfaces. Learn about model extraction attacks where adversaries aim to duplicate confidential ML models using only black-box access. Discover simple yet efficient techniques for extracting logistic regression, neural network, and decision tree models with near-perfect fidelity. Examine real-world demonstrations against BigML and Amazon Machine Learning services. Investigate potential countermeasures and their limitations, including the impact of omitting confidence values from model outputs. Gain insights into the broader implications for ML model deployment and the need for robust protection strategies in the growing field of ML-as-a-service.
Stealing Machine Learning Models via Prediction APIs
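To make the extraction idea in the description concrete, here is a minimal sketch of an equation-solving attack on a binary logistic regression model, in the spirit of the attacks the talk covers. The victim model, the query helper, and the dimensions below are hypothetical stand-ins for a real prediction API, not the authors' code: since such a model returns a confidence p = sigma(w.x + b), every query yields one linear equation w.x + b = ln(p / (1 - p)), and d + 1 independent queries recover w and b exactly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical "victim" model standing in for an ML-as-a-service endpoint;
# in a real attack the adversary only sees query responses, never the model.
rng = np.random.default_rng(0)
d = 5                                    # number of input features (assumed)
X_train = rng.normal(size=(200, d))
y_train = (X_train @ rng.normal(size=d) + 0.3 > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)

def query(x):
    """Black-box prediction API: returns the positive-class confidence."""
    return victim.predict_proba(x.reshape(1, -1))[0, 1]

# Equation-solving extraction: each returned confidence p satisfies
# sigma(w.x + b) = p, i.e. w.x + b = log(p / (1 - p)).
# d + 1 linearly independent query points give an exact linear system.
Q = rng.normal(size=(d + 1, d))          # adversary-chosen query points
logits = np.array([np.log(p / (1 - p)) for p in (query(x) for x in Q)])
A = np.hstack([Q, np.ones((d + 1, 1))])  # unknowns: [w_1 .. w_d, b]
solution = np.linalg.solve(A, logits)
w_stolen, b_stolen = solution[:d], solution[d]

print("true w:  ", victim.coef_.ravel().round(4))
print("stolen w:", w_stolen.round(4))
print("true b: %.4f  stolen b: %.4f" % (victim.intercept_[0], b_stolen))
```

This snippet only illustrates the simplest binary case; the talk and the underlying paper also treat multiclass models, neural networks, and decision trees (extracted via path-finding queries), as well as what happens when the API withholds confidence values.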