1. Intro
2. Virtual U
3. Online Photos
4. Face Modeling
5. Base Modeling
6. Texture Imputation
7. Correct for Eye Gaze
8. Experiments
9. Experimental Data
10. Results
11. Observations
12. Motion Detection Defense
13. First Experiment
14. Mitigations
15. Texture Detection
16. Conclusion
Description:
Explore a groundbreaking approach to bypassing modern face authentication systems in this 26-minute conference talk from USENIX Security '16. Researchers from the University of North Carolina at Chapel Hill show how public photos from social media can be used to build realistic, textured 3D facial models that undermine widely used face authentication solutions. The talk describes how a virtual reality (VR) system animates these facial models, tricking liveness detectors into accepting the 3D model as a real human face. It covers the technical stages of the VR-based spoofing attack, including base modeling, texture imputation, and eye gaze correction, then presents experimental data and results demonstrating that the threat to camera-based authentication systems is practical. Finally, it considers the implications of this new class of attacks and potential mitigations, including texture detection and motion-based defenses, underscoring the vulnerabilities of face authentication technologies and the importance of incorporating verifiable data sources into security systems.

Virtual U - Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos

USENIX