1. Intro
2. Typical Setup at Home
3. Physical Real-World Attacks
4. Splicing Demo 1
5. Splicing Demo 2
6. Future Attacks 1
7. Future Attacks 2
8. Attacking AI Assistant Business Logic
9. Architecture
10. Understanding Slots
11. Attackable Slots
12. Neural Networks and the Brain
13. Techniques for Attacking Neural Networks
14. What Can You Attack with Adversarial Examples?
15. Why Do Adversarial Masks Work?
16. Adversarial Result
17. Adversarial Input Generation Techniques
18. White Box Adversarial Attack Techniques
19. White Box Adversarial Attack Techniques
20. Black Box Adversarial Attack
21. Adversarial Patches
22. Defending Against Adversarial Samples
23. Trojaning Neural Networks
24. Defending Against Trojans
25. Model Data Extraction
26. Training Data Extraction
27. Summary
Description:
Explore security vulnerabilities in AI assistant-based applications through this AppSecUSA 2018 conference talk by Abraham Kang. Delve into the world of intelligent assistants, learning how they can be compromised despite seemingly secure setups. Discover various attack vectors, including physical real-world attacks, splicing techniques, and future potential threats. Gain insights into the architecture of AI assistants, understanding slots and their vulnerabilities. Examine neural networks and techniques for attacking them, including adversarial examples, masks, and patches. Learn about white box and black box adversarial attacks, as well as methods for defending against these threats. Investigate trojaning neural networks, model and training data extraction, and receive a comprehensive summary of AI assistant security concerns. Equip yourself with the knowledge to identify and address vulnerabilities in AI assistant applications.
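To give a flavor of the white-box adversarial attacks the outline mentions, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression "model". All weights and inputs below are made-up illustrative values, not from the talk; in a white-box setting the attacker knows the model parameters and can compute the loss gradient with respect to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical model parameters, known to the attacker (white-box assumption).
w = np.array([0.8, -1.2, 0.5])
b = 0.1

def predict(x):
    """Probability the model assigns to the positive class."""
    return sigmoid(np.dot(w, x) + b)

def fgsm(x, y_true, eps=0.25):
    """Perturb x by eps in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss w.r.t. the input is (p - y_true) * w, so we step along its sign.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5, -0.5])   # benign input
x_adv = fgsm(x, y_true=1.0)      # adversarially perturbed input

# The perturbed input scores lower for its true class.
print(predict(x), predict(x_adv))
```

The same idea scales to deep networks: replace the closed-form gradient with one obtained by backpropagation through the model.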

Security Vulnerabilities in AI Assistant Based Applications

OWASP Foundation