Explore membership inference attacks against machine learning models in this IEEE Symposium on Security & Privacy conference talk. Delve into how machine learning models can leak information about the individual data records on which they were trained, focusing on the basic membership inference attack: given a data record and black-box access to a model, determine whether that record was in the model's training dataset. Discover techniques for training adversarial inference models that recognize differences in the target model's predictions on inputs it was trained on versus inputs it was not. Examine empirical evaluations of these inference techniques on classification models trained by commercial "machine learning as a service" providers. Investigate the factors that influence data leakage and evaluate mitigation strategies using realistic datasets, including a sensitive hospital discharge dataset. Gain insights into machine learning privacy, attacks that exploit summary statistics, shadow models, and the construction of effective attack models.
Membership Inference Attacks against Machine Learning Models
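The shadow-model idea mentioned in the description can be sketched in a few dozen lines. The following is a minimal, illustrative Python example (using scikit-learn and a synthetic dataset): the dataset, the choice of random forests, the numbers of shadow models and samples are all assumptions for illustration, and the per-class attack models used in the talk are collapsed into a single attack model. The attacker trains shadow models on data it controls, labels their prediction vectors as "member" or "non-member" of the shadow training sets, and trains an attack model to make that distinction from prediction vectors alone.

```python
# Minimal sketch of a shadow-model membership inference attack, assuming the
# attacker only has black-box access (predict_proba) to the target model.
# Dataset, model families, sizes, and hyperparameters are illustrative choices,
# not the exact setup from the talk; per-class attack models are omitted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the population the target model was trained on.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)

# Target model: trained on a private subset the attacker never sees directly.
X_in, X_rest, y_in, y_rest = train_test_split(X, y, train_size=1000, random_state=1)
target = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_in, y_in)

# Shadow models: trained by the attacker on data from the same distribution,
# so membership in each shadow training set is known by construction.
attack_X, attack_y = [], []
for s in range(5):
    Xs_in, Xs_out, ys_in, _ = train_test_split(X_rest, y_rest, train_size=500,
                                               test_size=500, random_state=s)
    shadow = RandomForestClassifier(n_estimators=100, random_state=s).fit(Xs_in, ys_in)
    attack_X.append(shadow.predict_proba(Xs_in))    # prediction vectors for members
    attack_y.append(np.ones(len(Xs_in)))
    attack_X.append(shadow.predict_proba(Xs_out))   # prediction vectors for non-members
    attack_y.append(np.zeros(len(Xs_out)))

# Attack model: separates "was in the training set" from "was not",
# using only the prediction vectors as features.
attack = RandomForestClassifier(n_estimators=100, random_state=2).fit(
    np.vstack(attack_X), np.concatenate(attack_y))

# Membership inference against the target model via black-box queries only.
member_guess = attack.predict(target.predict_proba(X_in[:500]))
nonmember_guess = attack.predict(target.predict_proba(X_rest[-500:]))
print("flagged as members (true members):    ", member_guess.mean())
print("flagged as members (true non-members):", nonmember_guess.mean())
```

In this toy setup, the gap between the two printed rates is what the attack exploits; how large that gap is in practice, and how it depends on overfitting, model type, and dataset, is what the talk's empirical evaluation measures.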