1. Intro
2. Adversaries Don't Cooperate
3. Focus: Evasion Attacks
4. PDF Malware Classifiers
5. Random Forest
6. Automated Classifier Evasion Using Genetic Programming
7. Goal: Find Evasive Variant
8. Start with Malicious Seed
9. Generating Variants
10. Selecting Promising Variants
11. Oracle
12. Fitness Function
13. Classifier Performance
14. Execution Cost
15. Retraining Classifier
16. Hide Classifier "Security Through Obscurity"
17. Cross-Evasion Effects
18. Evading Gmail's Classifier
19. Conclusion
Description:
Explore the vulnerabilities of machine learning classifiers in security applications through this 20-minute conference talk from USENIX Enigma 2017. Delve into the reasons why classifiers, despite performing well in testing, can be easily thwarted by motivated adversaries in real-world scenarios. Examine how attackers construct evasive variants that are misclassified as benign, and understand the inherent fragility of many machine learning techniques, including deep neural networks. Learn about successful evasion techniques, including automated methods, and discover potential strategies to enhance classifier robustness against adversarial attacks. Gain insights into evaluating the resilience of deployed classifiers in adversarial environments, and understand the implications for the future of machine learning in security applications.
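The automated evasion pipeline outlined above (start with a malicious seed, generate variants, check them with an oracle, select by fitness) can be sketched as a toy genetic search. Everything below is an invented stand-in for illustration — the feature vectors, the threshold classifier, and the `oracle_still_malicious` check are hypothetical, not the talk's actual PDF-malware setup:

```python
import random

def classifier_score(sample):
    # Stand-in classifier: mean feature value; higher = more "malicious-looking".
    return sum(sample) / len(sample)

def oracle_still_malicious(sample):
    # Stand-in for the oracle (in the talk, e.g. a sandbox confirming the
    # variant still exhibits malicious behavior); here the first feature
    # plays the role of the payload and must stay intact.
    return sample[0] >= 1.0

def mutate(sample):
    # Generate a variant by randomly perturbing one non-payload feature.
    child = list(sample)
    i = random.randrange(1, len(child))
    child[i] = max(0.0, child[i] + random.uniform(-0.5, 0.5))
    return child

def evolve(seed, generations=200, population=20, threshold=0.5):
    pool = [seed]
    for _ in range(generations):
        variants = [mutate(random.choice(pool)) for _ in range(population)]
        # Fitness selection: discard variants the oracle says lost their
        # maliciousness; among the rest, prefer those the classifier
        # scores as most benign (lowest score).
        variants = [v for v in variants if oracle_still_malicious(v)]
        pool = sorted(pool + variants, key=classifier_score)[:population]
        if classifier_score(pool[0]) < threshold:
            return pool[0]  # evasive variant: classified benign, still malicious
    return None

random.seed(0)
seed = [1.0, 1.0, 1.0, 1.0]  # malicious seed, initially scored 1.0
evasive = evolve(seed)
```

The key structural point the talk makes is visible even in this toy: the search only needs black-box classifier scores plus an oracle for ground-truth behavior, so it works against any deployed classifier the attacker can query.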

Classifiers Under Attack: Evasion Techniques and Defensive Strategies

USENIX Enigma Conference