USENIX Security '24 - Formalizing and Benchmarking Prompt Injection Attacks and Defenses
Description:
Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Grab it
Learn about prompt injection attacks and defenses in this conference presentation from USENIX Security '24, where researchers from Penn State and Duke University present a comprehensive framework for understanding and evaluating these security threats. Explore how malicious instructions can be injected into LLM-Integrated Applications to manipulate outputs, and examine the systematic evaluation of 5 different attack methods and 10 defense strategies across 10 Large Language Models and 7 distinct tasks. Discover a new hybrid attack method that combines existing approaches, and gain access to an open-source platform for conducting further research in this emerging security field. The presentation addresses current limitations in prompt injection research by providing a formal framework and establishing a common benchmark for quantitative evaluation of future attacks and defenses.
Formalizing and Benchmarking Prompt Injection Attacks and Defenses