1. Introduction
2. CIFAR-10 classifier
3. What is Seldon Core
4. Deploying a model
5. Extra complexity
6. Best practices
7. Benchmark types
8. Benchmark tools
9. Automating the evaluation
10. Workflow managers
11. Workflows
12. Argo workflow
13. Reusability
14. Output
15. Resources
16. Wrap up
Description:
Explore automated machine learning performance evaluation in this 26-minute conference talk from KubeCon + CloudNativeCon North America 2021. Dive into the intricacies of benchmarking deployed production machine learning models in cloud native infrastructure. Learn about the theory behind ML model benchmarking, including key parameters like latency, throughput, and performance percentiles. Follow a hands-on example using Argo, Kubernetes, and Seldon Core to benchmark a model across multiple parameters for optimal hardware performance. Gain insights into workflow management, reusability, and best practices for evaluating ML models in various deployment scenarios.
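To make the key parameters the talk covers concrete, here is a minimal sketch (not taken from the talk) of how latency percentiles and throughput are typically measured against a deployed model. The endpoint URL, payload shape, and request count below are assumptions; substitute the prediction path and input schema of your own deployment.

# Minimal sketch: fire a fixed number of requests at a deployed model
# endpoint and report p50/p99 latency and throughput.
# ENDPOINT and PAYLOAD are hypothetical placeholders, not from the talk.
import statistics
import time
import requests

ENDPOINT = "http://localhost:8080/api/v1.0/predictions"  # assumed prediction URL
PAYLOAD = {"data": {"ndarray": [[0.1, 0.2, 0.3, 0.4]]}}   # assumed input shape
NUM_REQUESTS = 200

latencies = []
start = time.perf_counter()
for _ in range(NUM_REQUESTS):
    t0 = time.perf_counter()
    requests.post(ENDPOINT, json=PAYLOAD, timeout=10)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
print(f"p50 latency: {cuts[49] * 1000:.1f} ms")
print(f"p99 latency: {cuts[98] * 1000:.1f} ms")
print(f"throughput:  {NUM_REQUESTS / elapsed:.1f} req/s")

In the talk, measurements like these are produced by benchmark tools and wrapped in an Argo workflow so they can be repeated across hardware and deployment configurations; the sketch only illustrates what the raw numbers mean.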

Automated Machine Learning Performance Evaluation

CNCF [Cloud Native Computing Foundation]