[] Please like, share, and subscribe to our MLOps channels!
[] Security and vulnerabilities
[] Work at Cohere and OWASP
[] Previous work vs. LLM companies
[] LLM vulnerabilities
[] Good qualities to combat prompt injection problems
[] Data lineage
[] Red teaming
[] Freakiest LLM vulnerabilities
[] Severe autonomy concerns
[] Hallucinations
[] Prompt injection
[] Vector attacks to be recognized
[] LLMs being customized
[] Security changes due to maturity
[] OWASP Top 10 for Large Language Model Applications
[] Gandalf game
[] Prompt injection attack
[] Overlapping security
[] Data poisoning
[] Toxic data for LLMs
[] Wrap up
Description:
Embark on a trailblazing odyssey for enhanced security in this one-hour podcast featuring Ads Dawson, Senior Security Engineer at Cohere. Explore the challenges and solutions in securing large language models (LLMs) and natural language processing (NLP) APIs, covering threat modeling, data breach prevention, and defense strategies. Gain insights into the "OWASP Top 10 for Large Language Model Applications" project, co-founded by Ads, which identifies the key vulnerabilities facing LLM applications across the industry. Delve into insider news from the AI Village's 'Hack the Future' LLM red teaming event at DEF CON 31, and learn about the inaugural Generative AI Red Teaming showdown. Discover Ads' extensive experience in application security, network infrastructure, and cybersecurity, spanning startups to large enterprises, with a focus on LLM/AI security, web application security, and DevSecOps.
Guarding LLM and NLP APIs: A Trailblazing Odyssey for Enhanced Security - Podcast #190