Chapters:
1. Introduction
2. Meet Andrew
3. Deep Learning Applications
4. Adversarial Learning
5. Deanonymization
6. Tay
7. Simon Wecker
8. What is an adversarial attack
9. Examples of adversarial attacks
10. Why adversarial attacks exist
11. Accuracy
12. Accuracy Robustness
13. Adversarial Attacks
14. Adversarial Defense
15. Certified Robustness
16. Differential Privacy
17. Differential Privacy Equation
18. Other Methods
19. Example
20. Polytope Bounding
21. Test Time Samples
22. Training Time Attacks
23. Conclusion
Description:
Explore the critical topic of securing neural networks against adversarial attacks in this 49-minute seminar presented by Dr. Andrew Cullen, Research Fellow in Adversarial Machine Learning at the University of Melbourne. Delve into the vulnerabilities of machine learning systems to adversarial attacks and learn how these attacks can manipulate model outputs in ways that would not change a human's decision on the same input. Gain insights into adversarial attacks and defense strategies across different domains, and understand how to incorporate considerations of adversarial behavior into research and development work. Cover key concepts such as deep learning applications, deanonymization, accuracy vs. robustness, certified robustness, differential privacy, and training-time attacks. Discover practical examples and methods such as polytope bounding and test-time samples to enhance the security of neural networks.
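The seminar itself is not reproduced here, but as a rough illustration of the kind of attack the description refers to, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest test-time adversarial attacks. It is not drawn from Dr. Cullen's talk; it assumes PyTorch, pixel values in [0, 1], and hypothetical placeholders `model`, `image`, and `label`.

```python
# Minimal FGSM sketch (illustrative only, not from the seminar).
# Assumes PyTorch; `model`, `image`, and `label` are hypothetical placeholders.
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by at most `epsilon` in the direction that raises the loss;
    # the change is typically imperceptible to a human but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # assumes inputs live in [0, 1]
```

The "Differential Privacy Equation" chapter presumably refers to the standard (ε, δ)-differential-privacy guarantee: a randomized mechanism $\mathcal{M}$ is (ε, δ)-differentially private if, for all datasets $D, D'$ differing in a single record and every set of outputs $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.$$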

Getting Robust - Securing Neural Networks Against Adversarial Attacks

University of Melbourne