1. Intro
2. Deep Learning Tasks
3. Privacy Threats
4. Membership Inference
5. Training a Model
6. Gradients Leak Information
7. Different Learning/Attack Settings
8. Active Attack on Federated Learning
9. Active Attacks in Federated Model
10. Fully Trained Model
11. Central Attacker in Federated Model
12. Local Attacker in Federated Learning
13. Score Function
14. Experimental Setup
15. Attacks on Pretrained Models
16. Federated Attacks
17. Conclusions
Description:
Explore a comprehensive privacy analysis of deep learning in this 17-minute IEEE conference talk. Delve into the susceptibility of deep neural networks to inference attacks and examine white-box inference techniques for both centralized and federated learning models. Discover novel membership inference attacks that exploit vulnerabilities in stochastic gradient descent algorithms. Investigate why deep learning models may leak training data information and learn how even well-generalized models can be vulnerable to white-box attacks. Analyze privacy risks in federated learning settings, including active membership inference attacks by adversarial participants. Gain insights into experimental setups, attacks on pretrained models, and the implications for privacy in deep learning systems.
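To make the membership inference idea concrete: the core signal is that a trained model typically behaves differently on its training members (e.g., lower loss) than on unseen data. Below is a minimal, hypothetical sketch of a loss-threshold membership inference attack on synthetic loss values; the distributions and threshold sweep are illustrative assumptions, not the talk's white-box method (which additionally exploits gradient information from SGD).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses: models tend to assign lower loss to
# training members than to unseen examples. These synthetic distributions
# stand in for losses measured on a real model.
member_losses = rng.exponential(scale=0.2, size=1000)     # assumed members
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # assumed non-members

def attack_accuracy(members: np.ndarray, nonmembers: np.ndarray,
                    threshold: float) -> float:
    """Predict 'member' when loss < threshold; return balanced accuracy."""
    tpr = np.mean(members < threshold)        # members correctly flagged
    tnr = np.mean(nonmembers >= threshold)    # non-members correctly rejected
    return (tpr + tnr) / 2

# Sweep candidate thresholds and keep the best, giving an upper bound on
# this simple attack's balanced accuracy (0.5 = no better than guessing).
thresholds = np.linspace(0.0, 3.0, 100)
best = max(attack_accuracy(member_losses, nonmember_losses, t)
           for t in thresholds)
print(f"best balanced attack accuracy: {best:.2f}")
```

Whenever the best achievable accuracy is clearly above 0.5, the model leaks membership information; the talk's white-box attacks sharpen this signal using per-layer gradients rather than loss alone.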

Comprehensive Privacy Analysis of Deep Learning

IEEE