1. Intro
2. How do we generate adversarial examples?
3. Threat Models
4. A threat model is a formal statement defining when a system is intended to be secure.
5. This talk: non-certified defenses
6. For example: adversarial training
7. How complete are evaluations?
8. Case Study: ICLR 2018
9. Broken Defenses / Correct Defenses
10. Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples
11. Disentangling true robustness from apparent robustness is nontrivial
12. Lessons (2 of 2): performing better evaluations
13. To understand adversarial examples, repeatedly attack and defend, optimizing for lessons learned.
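The outline's second item asks how adversarial examples are generated; the classic answer is a gradient-based attack such as the fast gradient sign method (FGSM). Below is a minimal sketch of that idea against a hypothetical toy logistic-regression model (the function and parameter names are illustrative, not from the talk): perturb each input feature by epsilon in the direction that increases the model's loss.

```python
import math

def fgsm(x, w, b, y, epsilon):
    """One-step FGSM against a toy logistic-regression model (illustrative).

    x: input features, w/b: model weights and bias,
    y: true label (0 or 1), epsilon: L-infinity perturbation budget.
    """
    # Forward pass: p = sigmoid(w . x + b)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. each input feature: (p - y) * w_i
    # Step by epsilon in the sign of that gradient to increase the loss,
    # which keeps the perturbation inside an L-infinity ball of radius epsilon.
    return [xi + epsilon * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]
```

The same one-line principle (follow the sign of the input gradient) is what stronger iterative attacks repeat many times with a smaller step size.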
Description:
Explore the challenges and insights in evaluating defenses against adversarial examples in deep learning systems through this 46-minute talk by Nicholas Carlini from Google Brain. Delve into threat models, non-certified defenses, and case studies from ICLR 2018. Learn how to distinguish true robustness from apparent robustness and gain valuable lessons for conducting better evaluations. Understand the iterative process of attacking and defending to optimize learning in the field of adversarial examples.

Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

Simons Institute