1. Intro
2. Summary: Terminology (cont.), Targeted Attack Metrics
3. Existing Attacks
4. Fast Gradient Sign (FGS)
5. Jacobian-based Saliency Map Attack (JSMA)
6. New approach
7. Objective Functions Explored
8. Dealing with Box Constraints: x + δ ∈ [0, 1]
9. Finding Best Combination
10. Different Attacks (Cont.)
11. Attack Evaluation
12. Attacks on ImageNet
13. Defensive Distillation
Description:
Explore the robustness of neural networks in this 20-minute lecture from the University of Central Florida. Delve into targeted attack metrics, existing attacks such as the Fast Gradient Sign and Jacobian-based Saliency Map attacks, and a new approach to evaluating neural network vulnerability. Examine objective functions, box constraints, and methods for finding the best combination of attacks. Learn about attack evaluation techniques and their application to the ImageNet dataset. Conclude with an introduction to defensive distillation as a potential countermeasure against adversarial attacks.
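
The Fast Gradient Sign attack mentioned in the description can be summarized in a few lines. The sketch below is a minimal, illustrative (untargeted) version, assuming a hypothetical PyTorch classifier `model` and inputs already scaled to [0, 1]; it is not code taken from the lecture.

```python
import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, eps=0.03):
    """Perturb x by eps in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the gradient, then clip back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The box constraint x + δ ∈ [0, 1] from the outline is commonly handled with a change of variables: optimize an unconstrained variable w and map it into the box via (tanh(w) + 1) / 2, so no clipping or projection is needed during optimization. The sketch below illustrates that idea under the same assumptions (a hypothetical `model`, input `x`, and `target` label); the loss shown is a simplified distance-plus-classification objective, not the lecture's exact formulation.

```python
import torch
import torch.nn.functional as F

def box_constrained_perturbation(model, x, target, steps=100, lr=0.01, c=1.0):
    # Map x into tanh-space; clamp slightly inside (-1, 1) to keep atanh finite.
    w = torch.atanh((2 * x - 1).clamp(-0.999999, 0.999999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)   # always inside [0, 1] by construction
        dist = ((x_adv - x) ** 2).sum()     # distance term (L2 squared)
        cls_loss = F.cross_entropy(model(x_adv), target)  # simplified targeted term
        loss = dist + c * cls_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```
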

Evaluating Neural Network Robustness - Targeted Attacks and Defenses

University of Central Florida