1. Introduction
2. Research Goals
3. What Are Deep Generative Models?
4. Applications of Deep Generative Models
5. What Could Go Wrong
6. What Makes a Successful Attack
7. Model Inspection
8. Brute Force Sampling
9. MNIST
10. Training
11. Can You Do Something Better?
12. Approach Trail
13. Approach Schematic
14. Student Model
15. Redundancy
16. Token Example
17. StyleGAN
18. Attack Goals
19. Basic Defenses
20. Whitebox Access
Description:
Explore the vulnerabilities of Deep Generative Models (DGMs) and Generative Adversarial Networks (GANs) in this 39-minute Black Hat conference talk. Delve into a formal threat model for training-time attacks against DGMs, showing how attackers can backdoor pre-trained models and embed compromising data points. Learn about the material and reputational damage these attacks can inflict on organizations that use DGMs. Examine naïve detection mechanisms and discover effective combinations of static and dynamic inspections for detecting these attacks. Gain insights into research goals, applications of DGMs, the characteristics of a successful attack, model inspection techniques, and basic defense strategies. Presented by Killian Levacher, Ambrish Rawat, and Mathieu Sinn, the talk covers brute force sampling, student models, redundancy, and whitebox access, providing a comprehensive overview of the challenges and solutions in defending DGMs against adversarial attacks.

The Devil is in the GAN - Defending Deep Generative Models Against Adversarial Attacks

Black Hat