Explore the challenges and lessons learned in evaluating defenses against adversarial examples for machine learning classifiers in this 48-minute USENIX Security '19 conference talk. Delve into common evaluation pitfalls, recommendations for thorough defense assessments, and comparisons between this emerging research field and established security evaluation practices. Gain insights from Nicholas Carlini, Research Scientist at Google Research, as he surveys the ways proposed defenses have been broken and discusses the implications for future research. Learn about adversarial training, input transformations, and the importance of robust evaluation techniques in developing resilient machine learning models.
Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples
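As background for the talk, an adversarial example is an input modified by a small, often imperceptible perturbation that causes a classifier to change its prediction. The sketch below is not from the talk itself; it is a minimal, hypothetical illustration of the fast gradient sign method (FGSM) attacking a toy linear classifier, using only NumPy, with made-up weights and inputs:

```python
import numpy as np

def fgsm_linear(x, y, w, eps):
    """Fast Gradient Sign Method against a toy linear classifier.

    The classifier predicts sign(w @ x); y in {-1, +1} is the true label.
    Under the hinge loss max(0, 1 - y * (w @ x)), the gradient of the
    loss with respect to x is -y * w (when the margin is not yet met).
    FGSM moves every coordinate of x by eps in the direction that
    increases the loss, i.e. along sign of that gradient.
    """
    grad = -y * w                      # gradient of hinge loss w.r.t. x
    return x + eps * np.sign(grad)     # bounded perturbation: ||delta||_inf = eps

# Hypothetical example: a correctly classified point flips class
# after a perturbation of at most 0.6 per coordinate.
w = np.array([1.0, -2.0])
x = np.array([0.3, -0.1])              # score = 0.3 + 0.2 = 0.5  -> class +1
x_adv = fgsm_linear(x, y=1, w=w, eps=0.6)
# x_adv = [0.3 - 0.6, -0.1 + 0.6] = [-0.3, 0.5]
# new score = -0.3 - 1.0 = -1.3      -> class -1 (misclassified)
```

For linear models this attack is exactly optimal under an L-infinity budget; for deep networks it is only a first-order approximation, which is one reason the talk stresses that weak attacks can make a defense look far more robust than it is.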