1. Introduction
2. Adversarial Examples
3. Projected Gradient Descent
4. Fast Gradient Sign
5. Universal Perturbations
6. Blackbox Attacks
7. Stochastic Coordinate Descent
8. Ensemble Approach
9. Adversarial Patches
10. Challenges
11. Physical World Attacks
12. Adversarial Training
Description:
Explore adversarial examples for deep neural networks in this comprehensive lecture. Delve into white-box attacks, black-box attacks, real-world attacks, and adversarial training. Learn about Projected Gradient Descent, the Fast Gradient Sign Method, Carlini-Wagner methods, Universal Adversarial Perturbations, Adversarial Patches, Transferability Attacks, and Zeroth Order Optimization. Examine the challenges of physical-world attacks and the concept of adversarial training. Access the accompanying lecture notes for further study and explore the referenced research papers to deepen understanding of this critical aspect of deep learning security.
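
As a point of orientation for the topics listed above, the Fast Gradient Sign Method is the simplest of the white-box attacks covered. The sketch below is a minimal illustration, not the lecture's own code; it assumes a differentiable PyTorch classifier `model` with inputs scaled to [0, 1] and a label tensor `y`.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial example x + epsilon * sign(grad_x loss).

    epsilon bounds the L-infinity size of the perturbation.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterating this step with a projection back onto the epsilon-ball recovers the Projected Gradient Descent attack also discussed in the lecture.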

Adversarial Examples for Deep Neural Networks

Paul Hand