What Are the Adversary's Capabilities? To generate attacks, the attacker needs to know how changing the input affects the output.
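In the white-box setting this means the attacker can compute gradients of the model's output with respect to its input pixels. A minimal PyTorch sketch of that capability (the stand-in model and target identity are hypothetical, not from the talk):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a face-recognition DNN; any differentiable
# classifier serves to illustrate the white-box capability.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # attacker's input
target = torch.tensor([3])  # identity the attacker wants to impersonate

loss = nn.functional.cross_entropy(model(image), target)
loss.backward()

# image.grad now tells the attacker how changing each pixel moves the
# output -- the core knowledge needed to craft adversarial perturbations.
print(image.grad.shape)  # torch.Size([1, 3, 224, 224])
```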
What's a (Deep) Neural Network?
Face Recognition. Applications: surveillance, access control...
Face Recognition: Our Attacks
Deep Face Recognition
Apply Changes to Face Only
Apply Changes to Eyeglasses
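Restricting the perturbation to the eyeglass frames is elementwise masking; a minimal sketch, with a hypothetical frame region:

```python
import torch

image = torch.rand(3, 224, 224)           # aligned face image
mask = torch.zeros_like(image)            # 1.0 where the frames sit
mask[:, 90:110, 40:180] = 1.0             # hypothetical frame region

r = torch.rand_like(image)                # candidate perturbation
adv = (image + mask * r).clamp(0.0, 1.0)  # only frame pixels change
```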
Experiments in Digital Environment
Smooth Transitions. Natural images tend to be smooth.
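Smoothness is typically encouraged by penalizing the perturbation's total variation, which grows when neighboring pixels differ sharply. A sketch of one common formulation (the talk's exact variant may differ):

```python
import torch

def total_variation(r: torch.Tensor) -> torch.Tensor:
    """Total variation of a perturbation r of shape (C, H, W).

    Minimizing this term favors smooth perturbations, matching the
    observation that natural images vary slowly between neighbors.
    """
    dh = (r[:, 1:, :] - r[:, :-1, :]).abs().sum()  # vertical differences
    dw = (r[:, :, 1:] - r[:, :, :-1]).abs().sum()  # horizontal differences
    return dh + dw
```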
Printable Eyeglasses. Challenge: cannot print all colors.
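One way to capture the printing constraint is a non-printability score: each perturbation pixel contributes the product of its distances to every color the printer can reproduce, so the score is small only when every pixel sits near some printable color. A sketch (the names are my own):

```python
import torch

def non_printability_score(pixels: torch.Tensor,
                           printable: torch.Tensor) -> torch.Tensor:
    """pixels: (N, 3) perturbation colors in [0, 1].
    printable: (P, 3) colors the target printer can reproduce.
    """
    dists = torch.cdist(pixels, printable)  # (N, P) pairwise distances
    # The product over printable colors vanishes when a pixel matches
    # one of them, so minimizing the sum drives pixels toward printability.
    return dists.prod(dim=1).sum()
```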
Robust Perturbations
Putting All the Pieces Together - Physically realizable impersonation
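A hedged end-to-end sketch of the combined attack: a single eyeglass-frame perturbation is optimized over several captures of the attacker (for robustness to pose and lighting) while penalizing total variation and non-printability. The model, frame region, printable palette, and loss weights below are all illustrative placeholders:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))  # stand-in
images = torch.rand(8, 3, 224, 224)             # captures of the attacker
target = torch.full((8,), 3, dtype=torch.long)  # identity to impersonate

mask = torch.zeros(1, 3, 224, 224)
mask[..., 90:110, 40:180] = 1.0  # hypothetical eyeglass-frame region
printable = torch.rand(30, 3)    # palette the printer can reproduce

r = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([r], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    adv = (images + mask * r).clamp(0.0, 1.0)
    # Average the loss over all captures so one perturbation works
    # across pose and lighting variation.
    loss = nn.functional.cross_entropy(model(adv), target)
    # Smoothness: total variation of the perturbation.
    loss = loss + 1e-4 * ((r[..., 1:, :] - r[..., :-1, :]).abs().sum()
                          + (r[..., :, 1:] - r[..., :, :-1]).abs().sum())
    # Printability: per-pixel product of distances to printable colors.
    flat = r[0].permute(1, 2, 0).reshape(-1, 3)
    loss = loss + 1e-3 * torch.cdist(flat, printable).prod(dim=1).sum()
    loss.backward()
    opt.step()
```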
Does This Work?
Experiment: Realized Impersonations
Impersonation Attacks Pose Real Risk!
Extensions (See Paper)
Conclusions
Description:
Explore a conference talk that delves into real and stealthy attacks on state-of-the-art face recognition systems. Learn about the vulnerabilities of machine learning in ubiquitous applications, particularly in face recognition used for surveillance and access control. Discover how adversaries can manipulate inputs to affect outputs in deep neural networks, and examine specific attack methods targeting facial features and eyeglasses. Investigate experiments conducted in digital environments and the challenges of creating physically realizable impersonations. Gain insights into the risks posed by impersonation attacks and their potential extensions. Understand the implications of these findings for the security of face recognition technology.
Accessorize to a Crime - Real and Stealthy Attacks on State-Of-The-Art Face Recognition