1. Introduction and Welcome
2. The Challenges of LLM-Powered Applications
3. Case Study: Microsoft's Tay Bot
4. The Risks of LLMs in Real-World Applications
5. Testing LLM-Powered Applications
6. Security Concerns and Prompt Injection
7. Non-Determinism and Inaccuracy in LLMs
8. Building a Robust Test System
9. Types of Testing for LLMs
10. Metrics for Evaluating LLMs
11. Adversarial Testing and Auto Evaluation
12. Open Source Tools for LLM Testing
13. Conclusion and Final Thoughts
Description:
Explore a comprehensive 19-minute conference talk from Conf42 Prompt Engineering 2024 that delves into the critical aspects of testing Large Language Model (LLM) powered applications. Learn from real-world examples, including the Microsoft Tay Bot case study, to understand the challenges and risks associated with deploying LLMs in production environments. Master essential testing methodologies, security considerations around prompt injection, and strategies for handling non-deterministic behaviors and inaccuracies inherent to language models. Discover how to build robust test systems, implement various testing types, and utilize key metrics for LLM evaluation. Gain insights into advanced testing approaches like adversarial testing and auto evaluation, while exploring available open-source tools for effective LLM testing.
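The talk's themes of non-determinism and metric-based evaluation can be illustrated with a minimal sketch. All names here (`fake_llm`, `evaluate`, `contains_expected`) are hypothetical stand-ins, not code from the talk: the idea is to sample the model several times and assert a pass-rate threshold rather than an exact string match.

```python
import random

# Hypothetical stand-in for a real LLM call; any model client could be
# substituted here. Non-determinism is simulated with random.choice.
def fake_llm(prompt: str) -> str:
    responses = [
        "Paris is the capital of France.",
        "The capital of France is Paris.",
        "France's capital city is Paris.",
    ]
    return random.choice(responses)

# A simple keyword-based metric: does the answer mention the expected fact?
def contains_expected(answer: str, expected: str) -> bool:
    return expected.lower() in answer.lower()

# Because outputs vary run to run, sample several times and assert a
# pass-rate threshold instead of an exact string comparison.
def evaluate(prompt: str, expected: str, runs: int = 10,
             threshold: float = 0.8) -> bool:
    passes = sum(
        contains_expected(fake_llm(prompt), expected) for _ in range(runs)
    )
    return passes / runs >= threshold

print(evaluate("What is the capital of France?", "Paris"))
```

In a real test suite, `fake_llm` would be replaced by an actual model call, and the threshold chosen to tolerate the model's acceptable variance while still catching regressions.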

Testing LLM-Powered Applications - Best Practices and Evaluation Methods

Conf42