A Sound Mind in a Vulnerable Body - Practical Hardware Attacks on Deep Learning

Outline:
Hardware Attacks Can Break Mathematically-Proven Guarantees
(Weak) Hardware Attacks Can Be Exploited in the Cloud
Prior Work's Perspective on a Model's Robustness
The Worst-Case Perturbation
Threat Model - Single-Bit Adversaries
Evaluate the Weakest Attacker with Multiple Bit-flips
Our Attack: Reconstruction of DNN Architectures from the Trace
We Can Identify the Layers Accessed While Computing
Solution: Generate All Candidate Architectures
Solution: Eliminate Incompatible Candidates
Description:
Explore practical hardware attacks on deep learning systems in this USENIX Enigma Conference talk. Delve into the vulnerabilities of machine learning models deployed on real hardware, examining fault-injection and side-channel attacks. Learn how flipping a single bit in a deep neural network's memory representation can drastically degrade prediction accuracy, and discover how cache side-channel attacks can reverse-engineer proprietary DNN architecture details. Gain insights into the under-studied topic of ML vulnerability to hardware attacks, and understand the need for additional ML-level defenses that account for these hardware-level threats. Consider the implications of these findings for the security of machine learning systems and the importance of addressing both the "soundness of mind" and the "vulnerable body" in ML security research.
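
The single-bit result is easiest to see at the level of the IEEE-754 float32 encoding commonly used for DNN weights. The following minimal Python sketch is illustrative only (the flip_bit helper is hypothetical, not the speakers' fault-injection tooling): flipping the most significant exponent bit of one small weight yields a value on the order of 10^37, which then dominates every activation downstream, the kind of drastic accuracy collapse described above.

import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the float32 encoding of value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.04                 # a typical small DNN weight
print(flip_bit(weight, 30))  # exponent MSB flipped: ~1.36e+37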
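The reconstruction steps in the outline ("Generate All Candidate Architectures", then "Eliminate Incompatible Candidates") can be illustrated with a toy search. The sketch below is a heavily simplified assumption of mine, not the talk's implementation: it supposes the cache trace reveals only which library routines ran and in what order, enumerates every architecture over a tiny layer vocabulary, and discards hypotheses that could not have produced the observed sequence.

from itertools import product

# Hypothetical mapping from a layer type to the shared-library routine a
# cache probe (e.g., Flush+Reload-style) would observe it calling.
OBSERVABLE_CALL = {"conv": "gemm", "fc": "gemm", "pool": "pool"}

# Hypothetical probe output: the ordered routine calls seen during one
# forward pass of the victim model.
observed_calls = ("gemm", "gemm", "pool", "gemm")

# Step 1: generate all candidate 4-layer architectures over a tiny vocabulary.
candidates = list(product(("conv", "fc", "pool"), repeat=len(observed_calls)))
print(len(candidates))  # 81 hypotheses before elimination

# Step 2: eliminate every candidate whose layer sequence could not have
# produced the observed call sequence.
survivors = [arch for arch in candidates
             if tuple(OBSERVABLE_CALL[layer] for layer in arch) == observed_calls]
print(len(survivors))   # 8 remain: conv and fc both call gemm here, so extra
                        # signals (e.g., per-call timing) must prune the rest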