AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities

Higher Order Bias/Fairness, Physical Safety & Reliability concerns stem from unmitigated Security and Privacy Threats
Adversarial Audio Examples
Failure Modes in Machine Learning
Adversarial Attack Classification
Data Poisoning: Attacking Model Availability
Data Poisoning: Attacking Model Integrity
Poisoning Model Integrity: Attack Example
Proactive Defenses
Threat Taxonomy
Adversarial Goals
A Race Between Attacks and Defenses
Ideal Provable Defense
Build upon the Details: Security Best Practices
Define lower/upper bounds of data input and output
Threat Modeling AI/ML Systems and Dependencies
Wrapping Up
AI/ML Pivots to the SDL Bug Bar
Description:
Explore the critical landscape of AI security engineering in this 54-minute RSA Conference talk. Delve into the modeling, detection, and mitigation of new vulnerabilities in AI and machine learning systems. Learn about customer compromise through adversarial machine learning, higher-order bias and fairness concerns, and physical safety and reliability issues stemming from unmitigated security and privacy threats. Examine adversarial audio examples, failure modes in machine learning, and various adversarial attack classifications. Investigate data poisoning attacks on model availability and integrity, and discover proactive defense strategies. Gain insights into threat taxonomy, adversarial goals, and the ongoing race between attacks and defenses. Understand the concept of ideal provable defense and explore security best practices, including defining input/output bounds and threat modeling AI/ML systems. Conclude with an overview of AI/ML pivots to the Security Development Lifecycle (SDL) Bug Bar, equipping you with essential knowledge to protect and defend AI services against emerging threats.
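The talk itself ships no code, but a minimal sketch can make the evasion idea concrete. The example below mounts an FGSM-style perturbation against a linear classifier; scikit-learn, NumPy, the dataset, and the epsilon value are illustrative assumptions on my part, not material from the talk.

```python
# Hypothetical FGSM-style evasion attack on a linear model (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
w, b = clf.coef_[0], clf.intercept_[0]

# Gradient of the cross-entropy loss with respect to the input x.
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad = (p - label) * w

# Fast Gradient Sign Method: step in the direction that increases the loss.
# For a linear model this is the exact worst-case L-infinity perturbation,
# so the predicted probability is guaranteed to move toward the wrong class.
eps = 0.5
x_adv = x + eps * np.sign(grad)

print("P(class 1) on clean input:    ", clf.predict_proba([x])[0, 1])
print("P(class 1) on perturbed input:", clf.predict_proba([x_adv])[0, 1])
```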
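The availability flavor of data poisoning discussed in the talk can be sketched just as briefly: flip a fraction of the training labels and compare test accuracy against a clean baseline. Again, the dataset, library, and 20% poisoning rate are assumptions for illustration.

```python
# Hypothetical label-flipping poisoning attack on model availability.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model typically scores lower, degrading availability of
# accurate predictions even though the serving pipeline is untouched.
print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```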
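The "define lower/upper bounds of data input and output" practice translates naturally into a validation layer in front of the model. Here is a minimal sketch, assuming a dict-based feature payload and a scalar score; the names (FEATURE_BOUNDS, validate_input) and the bound values are hypothetical, not from the talk.

```python
# Hypothetical input/output bound checks for a model-serving wrapper.
FEATURE_BOUNDS = {"age": (0.0, 120.0), "amount": (0.0, 1e6)}  # per-feature ranges
SCORE_BOUNDS = (0.0, 1.0)

def validate_input(features: dict) -> None:
    """Reject requests whose features fall outside the declared ranges."""
    for name, (lo, hi) in FEATURE_BOUNDS.items():
        value = features.get(name)
        if value is None or not (lo <= value <= hi):
            raise ValueError(f"feature {name!r} out of bounds: {value}")

def validate_output(score: float) -> float:
    """Clamp model output; an out-of-range score signals a model fault."""
    lo, hi = SCORE_BOUNDS
    return min(max(score, lo), hi)
```

Rejecting out-of-range inputs before inference and clamping outputs after it narrows the attack surface for both evasion and poisoning, at the cost of maintaining the bounds alongside the model.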