Model Serving at the Edge - Challenges and Solutions with ModelMesh

CNCF [Cloud Native Computing Foundation]

Outline:
1. Intro
2. Outline
3. Generations of Computing
4. Machine Learning Lifecycle
5. Complexities of Model Serving
6. Kubernetes at the Edge
7. Introducing KServe
8. Easy to Use Interfaces
9. KServe Standardized Inference Protocol
10. Enter ModelMesh
11. ModelMesh Architecture
12. ModelMesh Serving Runtimes
13. ModelMesh On Edge?
14. Example Deployment
15. Higher Density Deployment
16. Challenges

Description:
Explore the challenges and solutions for deploying AI models on edge devices in this conference talk. Discover how ModelMesh, combined with K3s and MicroShift, can simplify model serving at the edge. Learn about ModelMesh, the multi-model serving backend of KServe, which offers a small-footprint control plane for managing model deployments on Kubernetes and uses multi-model runtimes with intelligent model loading and unloading to make the most of limited resources while serving multiple models for inference. Gain insights into the generations of computing, the machine learning lifecycle, the complexities of model serving, and Kubernetes at the edge. Explore KServe's easy-to-use interfaces and standardized inference protocol. Dive into the ModelMesh architecture and serving runtimes, and their application on edge devices. Examine an example deployment and the challenges of higher-density deployment.
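
To make the deployment flow concrete: a model is handed to KServe's ModelMesh backend by creating an InferenceService resource annotated with serving.kserve.io/deploymentMode: ModelMesh. The following is a minimal sketch using the official Kubernetes Python client; the model name, namespace, and storage URI are illustrative placeholders, not values from the talk.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (use load_incluster_config()
    # when running inside the cluster).
    config.load_kube_config()

    # InferenceService custom resource; the ModelMesh annotation routes it to
    # KServe's multi-model ModelMesh backend. Name, namespace, and storageUri
    # are placeholders for illustration.
    inference_service = {
        "apiVersion": "serving.kserve.io/v1beta1",
        "kind": "InferenceService",
        "metadata": {
            "name": "example-sklearn-model",
            "namespace": "modelmesh-serving",
            "annotations": {"serving.kserve.io/deploymentMode": "ModelMesh"},
        },
        "spec": {
            "predictor": {
                "model": {
                    "modelFormat": {"name": "sklearn"},
                    "storageUri": "s3://example-bucket/models/mnist-svm",  # hypothetical path
                }
            }
        },
    }

    # Create the resource; ModelMesh then loads/unloads the model across its
    # runtime pods on demand.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1beta1",
        namespace="modelmesh-serving",
        plural="inferenceservices",
        body=inference_service,
    )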

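Once a model is loaded, requests use KServe's standardized v2 (Open Inference Protocol) REST API, in which each tensor is described by a name, shape, and datatype. Below is a hedged sketch of an inference call against the hypothetical model above; the host, port, and input tensor name are assumptions that vary by installation.

    import requests

    # Cluster-internal ModelMesh endpoint; host and port are assumptions
    # (check the modelmesh-serving Service in your cluster).
    BASE_URL = "http://modelmesh-serving.modelmesh-serving:8008"
    MODEL_NAME = "example-sklearn-model"  # the hypothetical model deployed above

    # v2 protocol request: each input tensor carries a name, shape, datatype,
    # and a flat list of data values.
    payload = {
        "inputs": [
            {
                "name": "predict",  # tensor name expected by the model (assumption)
                "shape": [1, 4],
                "datatype": "FP32",
                "data": [6.8, 2.8, 4.8, 1.4],
            }
        ]
    }

    resp = requests.post(f"{BASE_URL}/v2/models/{MODEL_NAME}/infer", json=payload, timeout=10)
    resp.raise_for_status()

    # The response mirrors the request shape: a list of named output tensors.
    for output in resp.json()["outputs"]:
        print(output["name"], output["shape"], output["data"])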