1. Intro
2. Adversarial attacks on deep learning
3. Why should we care?
4. Adversarial robustness
5. How do we strictly upper bound the maximization?
6. This talk
7. What causes adversarial examples?
8. Randomization as a defense?
9. Visual intuition of randomized smoothing
10. The randomized smoothing guarantee
11. Proof of certified robustness (cont.)
12. Caveats (a.k.a. the fine print)
13. Comparison to previous SOTA on CIFAR10
14. Performance on ImageNet
Description:
Explore the frontiers of deep learning in this 48-minute lecture by Zico Kolter from Carnegie Mellon University. Delve into the critical topic of provable robustness in deep learning systems, moving beyond traditional bound propagation techniques. Gain insights into adversarial attacks, their significance, and the concept of adversarial robustness. Examine the causes of adversarial examples and evaluate randomization as a potential defense mechanism. Discover the visual intuition behind randomized smoothing and understand its guarantees. Follow the proof of certified robustness, while considering important caveats. Compare the presented approach with previous state-of-the-art methods on CIFAR10 and assess its performance on ImageNet. Enhance your understanding of advanced deep learning concepts and their practical implications in this comprehensive talk from the Simons Institute's "Frontiers of Deep Learning" series.
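To make the randomized smoothing guarantee from the outline concrete: the smoothed classifier g returns the class the base classifier f most often outputs under Gaussian noise, g(x) = argmax_c P(f(x + ε) = c) with ε ~ N(0, σ²I), and its top prediction is certifiably stable within an L2 radius R = (σ/2)(Φ⁻¹(pA) − Φ⁻¹(pB)). Below is a minimal Python sketch of this prediction-and-certification procedure in the spirit of Cohen, Rosenfeld, and Kolter (2019), the line of work this talk covers; the base classifier, noise level σ, and sample count n are illustrative placeholders, and plain empirical frequencies stand in for the high-probability confidence bounds a real certification requires.

```python
import numpy as np
from scipy.stats import norm

def smoothed_counts(base_classifier, x, sigma=0.25, n=1000, num_classes=10):
    """Monte Carlo class counts for g(x) = argmax_c P(f(x + eps) = c),
    eps ~ N(0, sigma^2 I). `base_classifier` maps an input array to a
    class index; all parameters here are illustrative, not from the talk."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        eps = sigma * np.random.randn(*x.shape)  # isotropic Gaussian noise
        counts[base_classifier(x + eps)] += 1
    return counts

def certify(counts, sigma=0.25):
    """Return the predicted class and an L2 radius within which that
    prediction provably cannot change, via
    R = (sigma / 2) * (Phi^{-1}(pA) - Phi^{-1}(pB)).
    Empirical frequencies are used where the real procedure would use
    confidence bounds; clipping avoids infinite ppf values in this sketch."""
    n = counts.sum()
    order = np.argsort(counts)[::-1]              # classes by descending count
    p_a = np.clip(counts[order[0]] / n, 1e-6, 1 - 1e-6)
    p_b = np.clip(counts[order[1]] / n, 1e-6, 1 - 1e-6)
    radius = (sigma / 2.0) * (norm.ppf(p_a) - norm.ppf(p_b))
    return int(order[0]), max(float(radius), 0.0)

# Toy usage with a linear base classifier on 2-D inputs (purely illustrative):
def f(x):
    return int(x[0] + x[1] > 0)

counts = smoothed_counts(f, np.array([1.0, 2.0]), sigma=0.25, n=1000, num_classes=2)
label, radius = certify(counts, sigma=0.25)
print(label, radius)
```

Note that the actual certification procedure in the paper additionally abstains whenever the lower confidence bound on pA fails to clear the runner-up, rather than reporting a radius of zero as this sketch does.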

Provable Robustness Beyond Bound Propagation

Simons Institute