Getting Robust - Securing Neural Networks Against Adversarial Attacks

Explore the critical topic of securing neural networks against adversarial attacks in this 49-minute seminar presented by Dr. Andrew Cullen, Research Fellow in Adversarial Machine Learning at the University of Melbourne. Delve into the vulnerabilities of machine learning systems to adversarial attacks and learn how these attacks can manipulate model outputs in ways that would not affect human decision-making. Gain insights into adversarial attacks and defense strategies across different domains, and understand how to incorporate considerations of adversarial behavior into research and development work. Cover key concepts such as deep learning applications, deanonymization, the accuracy vs. robustness trade-off, certified robustness, differential privacy, and training-time attacks. Discover practical examples and methods such as polytope bounding and test-time samples to enhance the security of neural networks.
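For context on what "test-time samples" can involve, below is a minimal sketch in the spirit of randomized smoothing, a common certified-robustness technique that votes over noisy copies of an input at prediction time. This code is not from the seminar; `model`, `sigma`, and `n_samples` are illustrative assumptions.

```python
# Minimal sketch of test-time sampling in the spirit of randomized smoothing.
# Not taken from the seminar; `model`, `sigma`, and `n_samples` are
# illustrative assumptions, not the speaker's implementation.
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, n_classes=10):
    """Classify x by majority vote over Gaussian-noised copies of it.

    A prediction that stays stable under added noise is empirically (and,
    with enough samples, certifiably) robust to small input perturbations.
    """
    model.eval()
    with torch.no_grad():
        # Draw n_samples noisy copies of the input: shape (n_samples, *x.shape)
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        preds = model(noisy).argmax(dim=1)           # one class vote per sample
        counts = torch.bincount(preds, minlength=n_classes)
    return counts.argmax().item()                    # majority-vote class

# Toy usage with a linear "classifier" on a flat 784-dimensional input:
toy_model = torch.nn.Linear(784, 10)
x = torch.randn(784)
print(smoothed_predict(toy_model, x))
```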