Building and Maintaining High-Performance AI
Data Science Dojo

Contents:
1. Introduction
2. Managing ML Model Performance is a huge problem
3. What we often hear
4. Why is frequent retraining not sufficient?
5. Why is alerting alone not sufficient?
6. Observe and iterate
7. Fundamental #1: Observe & Iterate
8. Example: Addressing concept drift
9. Example: Addressing data pipeline issue
10. Debug rapidly
11. Performance Debugging In Action
12. Data pipeline issue for Latitude Feature!
13. Test and Monitor LLMs
14. Key Takeaways
Description:
Dive into a comprehensive 51-minute talk on building and maintaining high-performance AI models. Explore the challenges of developing and sustaining model performance in production environments, addressing issues like model decay and real-world changes. Learn about essential performance metrics, identifying model degradation, and tackling data and concept drift. Gain insights into systematic testing, debugging, and monitoring techniques for AI models. The lecture covers conceptual foundations and includes practical demonstrations using real models. Discover key topics such as optimal testing points in ML model development, types of performance and drift testing, and strategies for systematic model improvement. Follow along with a detailed breakdown of content, including examples of addressing concept drift and data pipeline issues, performance debugging in action, and considerations for testing and monitoring Large Language Models (LLMs).
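
For a flavor of the kind of drift check the talk's examples revolve around (such as the latitude-feature pipeline issue), here is a minimal Python sketch. It is not code from the talk: it compares a live sample of a numeric feature against a training-time reference using a two-sample Kolmogorov-Smirnov test from scipy. The feature name "latitude", the synthetic data, and the 0.05 alerting threshold are all illustrative assumptions.

```python
# Illustrative sketch only -- not code from the talk.
# Flags possible data drift on one numeric feature by comparing the live
# distribution against a training-time reference with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=37.7, scale=0.5, size=5_000)   # training-time latitudes (assumed)
production = rng.normal(loc=39.1, scale=0.5, size=1_000)  # shifted live latitudes (assumed)

stat, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # assumed alerting threshold
    print(f"Drift suspected on 'latitude': KS={stat:.3f}, p={p_value:.4f}")
else:
    print("No significant drift detected on 'latitude'.")
```

In a monitoring setup, a check like this would typically run on a schedule per feature, with the flagged features routed into the observe-debug-iterate loop the talk describes.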
