1. THE ADVANCED COMPUTING SYSTEMS ASSOCIATION
2. Do models leak training data?
3. Act I: Extracting Training Data
4. A New Attack: Training Data Extraction
5. 1. Generate a lot of data 2. Predict membership (see the sketch after this list)
6. Evaluation
7. Up to 5% of the output of language models is verbatim copied from the training dataset
8. Case study: GPT-2
9. Act II: Ad-hoc privacy isn't
10. Act III: Whatever can we do?
11. 3. Use differential privacy
12. Questions?
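
To make the two-step attack outlined in segment 5 concrete, here is a minimal sketch in Python. It assumes the public GPT-2 checkpoint from the HuggingFace `transformers` library as the target model and uses a zlib-compression-to-perplexity ratio as a stand-in membership signal (one of several signals discussed in the underlying research); the sample count, generation settings, and scoring are illustrative placeholders, not values from the talk.

```python
# Sketch of the two-step extraction attack: (1) generate a lot of data,
# (2) predict membership by ranking samples the model is "too confident" about.
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the target model."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Step 1: generate a lot of data by unconditional sampling from the model.
samples = []
prompt = tokenizer("<|endoftext|>", return_tensors="pt").input_ids.to(device)
for _ in range(100):  # the real attack samples far more generations than this
    out = model.generate(prompt, do_sample=True, max_length=64, top_k=40,
                         pad_token_id=tokenizer.eos_token_id)
    samples.append(tokenizer.decode(out[0], skip_special_tokens=True))

# Step 2: predict membership. Memorized text tends to have low perplexity
# relative to its compressed (zlib) length, so a high ratio is suspicious.
def membership_score(text: str) -> float:
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return zlib_entropy / perplexity(text)

for text in sorted(samples, key=membership_score, reverse=True)[:10]:
    print(round(membership_score(text), 2), text[:80].replace("\n", " "))
```

The top-ranked generations are then checked (by search or against the training corpus, where available) to confirm which ones are verbatim training data.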
Description:
Explore the critical privacy concerns in machine learning models through this 23-minute conference talk from USENIX Enigma 2022. Delve into Nicholas Carlini's research at Google, uncovering how current models can leak personally-identifiable information from training datasets. Examine the case study of GPT-2, where up to 5% of output is directly copied from training data. Learn about the challenges in preventing data leakage, the ineffectiveness of ad-hoc privacy solutions, and the trade-offs of using differentially private gradient descent. Gain insights into potential research directions and practical techniques for testing model memorization, equipping both researchers and practitioners with valuable knowledge to address this pressing issue in the field of machine learning.
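
The description mentions the trade-offs of differentially private gradient descent. Below is a minimal, illustrative sketch of one DP-SGD step (per-example gradient clipping plus Gaussian noise) in plain PyTorch; the function name, clipping norm, and noise multiplier are placeholders rather than values from the talk, and real training would track the privacy budget with a purpose-built library such as Opacus.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update: clip each example's gradient, sum, add noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients, each clipped to an L2 norm of `clip_norm`
    # so no single training example can dominate the update.
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Gaussian noise calibrated to the clipping norm is what yields the guarantee.
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / len(xs)

    optimizer.step()
```

The clipping bounds each example's influence on the model and the noise hides what remains, which is also why accuracy drops; that cost is the trade-off the talk highlights.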

When Machine Learning Isn't Private

USENIX Enigma Conference