1. Introduction
2. Adversarial Examples
3. Why Care
4. What Are Defenses
5. Adversarial Training
6. Thermometer Encoding
7. Input Transformation
8. Evaluating the Robustness
9. Why Are Defenses Easily Broken
10. Lessons Learned
11. Adversarial Training
12. Empty Set
13. Evaluating Adversely
14. Actionable Advice
15. Evaluation
16. Holding Out Data
17. FGSM
18. Gradient Descent
19. No Bounds
20. Random Classification
21. Negative Things
22. Evaluate Against the Worst Attack
23. Accuracy vs. Distortion
24. Verification
25. Gradient Free
26. Random Noise
27. Conclusion
28. AES 1997
29. Attack Success Rates in Insecurity
30. Why Are We Not Yet Crypto
31. How Much We Can Prove
32. Still a Lot of Work to Do
33. L2 Distortion
34. We Don't Know What We Want
35. We Don't Have That Today
36. Summary
37. Questions
Description:
Explore the challenges and lessons learned in evaluating defenses against adversarial examples in machine learning classifiers during this 48-minute USENIX Security '19 conference talk. Delve into common evaluation pitfalls, recommendations for thorough defense assessments, and comparisons between this emerging research field and established security evaluation practices. Gain insights from Research Scientist Nicholas Carlini of Google Research as he surveys the ways defenses have been broken and discusses the implications for future research. Learn about adversarial training, input transformations, and the importance of robust evaluation techniques in developing resilient machine learning models.
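
FGSM appears in the outline above among the common evaluation pitfalls (evaluating only against a single weak attack). For context, here is a minimal sketch of the Fast Gradient Sign Method in PyTorch; the model, inputs, and epsilon budget are illustrative placeholders, not taken from the talk:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss).
    # `model` is any differentiable classifier; x is a batch of inputs in [0, 1].
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range

A recurring lesson from the talk is that accuracy against this single one-step attack is not a meaningful robustness measure; defenses should also be tested against stronger iterative and adaptive attacks (see "Evaluate Against the Worst Attack" in the outline).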

Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

USENIX