Chapters:
1. Intro
2. ML Cycle
3. Data Monitoring
4. KS Test
5. Categorical Features
6. One-way Chi-squared
7. Monitoring Tests
8. Tools
9. MLflow
10. Notebooks
11. ML Workflow
12. MLflow Delta
13. Other Notebooks
14. Widgets
15. Notebook Setup
16. Train Cycle Learn Pipeline
17. Data Logging
18. MLflow Run
19. MLflow Model Registry
20. MLflow Experiment
21. Model Registry
22. New Data
23. Feature Checks
24. MLflow Registry
25. Null Proportion
26. New Incoming Data
27. Chi-squared Test
28. Action
29. Model Parameters
30. Model Staging
31. Model Migration
32. Missingness Check
33. Price Check
34. Categorical Check
35. Recap
Description:
Explore a comprehensive 55-minute conference talk on testing machine learning models in production. Learn about core statistical tests and metrics for detecting data and concept drift, preventing models from becoming stale and detrimental to business. Dive deep into implementing robust testing and monitoring frameworks using open-source tools like MLflow, SciPy, and statsmodels. Gain valuable insights from Databricks' customer experiences and discover key tenets for testing model and data validity in production. Walk through a generalizable demo utilizing MLflow to enhance reproducibility. Cover topics including the ML cycle, data monitoring, KS tests, categorical features, one-way chi-squared tests, monitoring tools, MLflow notebooks, ML workflows, data logging, model registry, feature checks, and model staging and migration.
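
The KS-test, chi-squared, and missingness chapters all reduce to comparing a reference (training-time) sample of a feature against newly arriving data. Below is a minimal, illustrative sketch of those three checks using SciPy and pandas; the feature names, the 0.05 significance level, and the 0.02 missingness tolerance are assumptions for illustration, not values from the talk.

import numpy as np
import pandas as pd
from scipy import stats

ALPHA = 0.05  # assumed significance level shared by both tests

def ks_drift(reference: pd.Series, incoming: pd.Series) -> bool:
    # Two-sample Kolmogorov-Smirnov test for a numeric feature (KS Test chapter).
    _, p_value = stats.ks_2samp(reference.dropna(), incoming.dropna())
    return p_value < ALPHA  # True means the incoming distribution differs significantly

def chi2_drift(reference: pd.Series, incoming: pd.Series) -> bool:
    # One-way chi-squared goodness-of-fit test for a categorical feature
    # (One-way Chi-squared / Categorical Check chapters).
    proportions = reference.value_counts(normalize=True)
    observed = incoming.value_counts().reindex(proportions.index, fill_value=0)
    expected = proportions * observed.sum()  # reference mix scaled to the incoming sample size
    _, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
    return p_value < ALPHA

def null_proportion_shift(reference: pd.Series, incoming: pd.Series, tol: float = 0.02) -> bool:
    # Missingness check (Null Proportion chapter): flag a feature whose null rate moved by more than tol.
    return abs(reference.isna().mean() - incoming.isna().mean()) > tol

# Toy usage: the incoming numeric feature has drifted upward, so the KS check fires.
rng = np.random.default_rng(0)
reference_price = pd.Series(rng.normal(loc=100, scale=10, size=1000))
incoming_price = pd.Series(rng.normal(loc=105, scale=10, size=1000))
print(ks_drift(reference_price, incoming_price))  # True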
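
The MLflow chapters (run logging, experiment tracking, model registry, staging, and migration) follow the standard MLflow tracking and registry APIs. The sketch below shows one plausible shape of that workflow; the experiment name, the registered model name "price_model", and the toy regression data are placeholders, not details from the Databricks demo.

import numpy as np
import mlflow
import mlflow.sklearn
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Toy data standing in for the demo's training set.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

mlflow.set_experiment("drift-monitoring-demo")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    rmse = float(np.sqrt(mean_squared_error(y_val, model.predict(X_val))))
    mlflow.log_param("n_estimators", 100)   # model parameters
    mlflow.log_metric("rmse", rmse)         # metric logged alongside the run
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="price_model")  # registers a new version

# Promote the newly registered version to Staging once its feature checks pass.
client = MlflowClient()
version = client.get_latest_versions("price_model", stages=["None"])[0].version
client.transition_model_version_stage(name="price_model", version=version, stage="Staging")

Newer MLflow releases favor registry aliases over stages, but the stage-based calls above correspond to the staging and migration workflow named in the chapter list.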

Testing ML Models in Production - Detecting Data and Concept Drift

Databricks