Adversarial Examples in Machine Learning - Crafting and Defending Against Attacks

Crafting adversarial examples: fast gradient sign method
Threat model of a black-box attack
Our approach to black-box attacks
Adversarial example transferability
Intra-technique transferability: cross training data
Cross-technique transferability
Attacking remotely hosted black-box models
Results on real-world remote systems
Hands-on tutorial with the MNIST dataset
Description:
Explore the vulnerabilities of machine learning models to adversarial examples in this 20-minute conference talk from USENIX Enigma 2017. Delve into the world of subtly modified malicious inputs that can compromise the integrity of model outputs, potentially affecting various systems from vehicle control to spam detection. Learn about misclassification attacks on image, text, and malware classifiers, and discover how adversarial examples can transfer between different models. Gain practical knowledge through a hands-on tutorial on adversarial example crafting, covering algorithms, threat models, and proposed defenses. Join Nicolas Papernot, Google PhD Fellow at The Pennsylvania State University, as he guides you through the intricacies of this critical aspect of machine learning security.
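
For a concrete sense of the crafting step named in the outline above, here is a minimal sketch of the fast gradient sign method, which perturbs an input by eps in the direction of the sign of the loss gradient: x_adv = x + eps * sign(∇x J(θ, x, y)). This is not the tutorial's own code; it is a generic illustration assuming a PyTorch classifier (model, x, y, and eps are all hypothetical names, with model returning logits and x scaled to [0, 1]):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step fast gradient sign method.

    Perturbs each input by eps in the direction that most increases
    the classification loss, as estimated by the gradient's sign.
    """
    x = x.clone().detach().requires_grad_(True)  # hypothetical inputs in [0, 1]
    loss = F.cross_entropy(model(x), y)          # model is assumed to return logits
    loss.backward()                              # populates x.grad
    # Step along the sign of the input gradient, then clip back to the
    # valid pixel range so the result is still a legal image.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

On [0, 1]-scaled MNIST pixels, eps values around 0.1 to 0.25 are common in the literature: large enough to flip many predictions, yet small enough that the digit remains obvious to a human.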