Building and Evaluating Prompts on Production Grade Datasets for Conversational AI
Description:
Learn how to effectively construct and evaluate prompts for production-grade Large Language Model (LLM) implementations in this 29-minute conference talk from the Toronto Machine Learning Series. Explore methodologies and techniques for creating production-style datasets tailored to LLM tasks, with a focus on conversational AI applications. Voiceflow's Lead of Agent Performance & ML Platform, Bhuvana Adur Kannan, and Machine Learning Engineer Yoyo Yang share practical insights from developing and deploying prompt-based features, along with the challenges of prompt engineering in production environments and lessons learned from real-world implementations.

Toronto Machine Learning Series (TMLS)