run an sql query that deletes all records in the database
6
building our won llm vulnerability scanner to audit and secure ai applications
7
about sophie and joshua
8
use cases of llms
9
llm security
10
overreliance
11
model denial of service
12
training data poisoning
13
prompt injection
14
building our own llm vulnerability scanner
15
self-hosted llm setup
16
coding the cli tool
17
the end
Description:
Discover how to build an LLM vulnerability scanner to enhance the security of AI applications in this conference talk from Conf42 LLMs 2024. Explore the potential risks associated with Large Language Models, including overreliance, model denial of service, training data poisoning, and prompt injection. Learn about self-hosted LLM setups and follow along as the speakers demonstrate the process of coding a CLI tool for vulnerability scanning. Gain valuable insights into LLM security and practical strategies for auditing and securing AI applications.
Building an LLM Vulnerability Scanner to Secure AI Applications