1. Introduction
2. What is an LLM
3. No one has all the answers
4. OWASP Top 10
5. Prompt Injection
6. Do Anything Mode
7. Bug Bounty Program
8. Data Leakage
9. Sandboxing
10. Running code
11. SSRF vulnerability
12. LLM generated content
13. Insufficient AI alignment
14. Data poisoning
15. Security challenges
16. Best practices
17. Real life examples
18. Resources
19. AI vs Human Resources
Description:
Explore the OWASP Top 10 security risks for Large Language Models (LLMs) in this 39-minute DevSecCon talk. Delve into the rapidly evolving world of Generative AI and its potential impact on various industries. Learn about critical concepts such as prompt injection, data leakage, sandboxing, and insufficient AI alignment. Gain insights into best practices, real-life examples, and resources for managing LLM applications securely. Discover the challenges and opportunities presented by AI-generated content, and compare AI capabilities with human resources. Equip yourself with essential knowledge for risk management and for building robust security controls in the era of LLMs.

OWASP Top 10 Security Risks for Large Language Models

DevSecCon