1. Musical introduction to Rahul Parundekar
2. LLMs in Production Conference announcement
3. Purchase our Swag shirt!
4. Declarative Paradigm
5. Why now?
6. It's great for scalability
7. Most MLOps tools work well with K8s
8. Easy deploys with tool-provided CRDs
9. Caveats
10. This talk
11. 3 Ways to Serve ML Models
12. Way 1: Serving a Model with an HTTP Endpoint
13. Way 2: Serving the Model with a Message Queue
14. Way 3: Long-running Task that Performs Batch Processing
15. Build your own container
16. The main predictor 1/2: Singleton with load method
17. The main predictor 2/2: Predict
18. Way 1: 5 steps
19. Way 2: 2 steps
20. Way 3: 2 steps
21. Tests: Sanity check for the model
22. Bringing it together: Entrypoint
23. Continuous Integration (CI)
24. Create docker-compose.yaml to make it easier for CI
25. On PR: Run tests with GitHub Actions
26. Branch protection
27. On PR: GitHub Actions automatically runs our tests
28. On PR: PRs can then be merged on approval
29. Container Repository
30. Continuous Integration (CI)
31. On merge to main
32. Actions that can constrain
33. TODO
34. Continuous Delivery
35. Argo CD
36. Image promotion with Kustomize
Description:
Explore a comprehensive 59-minute conference talk on declarative MLOps and streamlining model serving on Kubernetes. Discover how to leverage native K8s operators for deploying models, learn best practices for containerizing models, and implement CI/CD using GitOps. Gain insights into three ways of serving ML models: HTTP endpoints, message queues, and batch processing. Dive into the process of building custom containers, creating predictors, and implementing continuous integration and delivery workflows. Learn about tools like GitHub Actions, Argo CD, and Kustomize for efficient MLOps practices. Understand the benefits of the declarative paradigm in MLOps, including scalability and compatibility with various tools. Perfect for data scientists and ML engineers looking to enhance their model deployment and management skills in production environments.
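
The "main predictor" chapters describe a singleton that loads the model once per container and exposes a predict method. A minimal Python sketch of that pattern (the class, module, and path names are assumptions, not taken from the talk):

```python
import threading

import joblib  # assumed: the model is persisted with joblib


class Predictor:
    """Process-wide singleton that loads the model once and serves predictions."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._model = None
            return cls._instance

    def load(self, path: str = "/models/model.joblib") -> None:
        # Load weights exactly once; repeated calls are no-ops.
        if self._model is None:
            self._model = joblib.load(path)

    def predict(self, features):
        # Assumes a scikit-learn-style model with a predict() method.
        if self._model is None:
            self.load()
        return self._model.predict([features]).tolist()
```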
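
Way 1 then wraps that predictor in an HTTP endpoint. A sketch using FastAPI, one common choice; the talk does not prescribe a specific framework:

```python
from fastapi import FastAPI
from pydantic import BaseModel

from predictor import Predictor  # the singleton sketched above (assumed module name)

app = FastAPI()
predictor = Predictor()
predictor.load()  # load weights once, at container startup


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    return {"prediction": predictor.predict(req.features)}
```

Run with, for example, `uvicorn app:app --host 0.0.0.0 --port 8080` inside the container.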
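
Way 2 swaps the web server for a message-queue consumer. A sketch against RabbitMQ via pika; the broker host, queue name, and payload shape are assumptions:

```python
import json

import pika  # assumed broker: RabbitMQ

from predictor import Predictor  # the singleton sketched above (assumed module name)

predictor = Predictor()
predictor.load()

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="inference-requests")


def on_message(ch, method, properties, body):
    payload = json.loads(body)  # assumed payload: {"features": [...]}
    result = predictor.predict(payload["features"])
    print(result)  # a real worker would publish this to a results queue


channel.basic_consume(
    queue="inference-requests", on_message_callback=on_message, auto_ack=True
)
channel.start_consuming()
```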
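
Way 3 is a long-running or scheduled job that scores inputs in batch. A minimal sketch, assuming newline-delimited JSON files for input and output:

```python
import json

from predictor import Predictor  # the singleton sketched above (assumed module name)

predictor = Predictor()
predictor.load()

# Score every record in the input batch and write the results alongside it.
with open("/data/inputs.jsonl") as src, open("/data/outputs.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        record["prediction"] = predictor.predict(record["features"])
        dst.write(json.dumps(record) + "\n")
```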
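
On the CI side, a docker-compose.yaml lets the same build-and-test command run locally and in CI. A hedged sketch; the service names, image, and commands are illustrative:

```yaml
# docker-compose.yaml -- build the model image and run its test suite
services:
  model-server:
    build: .
    image: ghcr.io/example/model-server:dev  # hypothetical registry/repo
    command: uvicorn app:app --host 0.0.0.0 --port 8080
    ports:
      - "8080:8080"
  tests:
    build: .
    command: pytest tests/
```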
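
The on-PR chapters pair that with a GitHub Actions workflow that runs the tests automatically, so branch protection can require a green check before merge. A minimal sketch:

```yaml
# .github/workflows/pr-tests.yaml
name: PR tests
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite in the container
        run: docker compose run --rm tests
```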
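
"Image promotion with Kustomize" typically means pinning the newly built tag in an overlay that Argo CD syncs to the cluster. A sketch; the image name and tag are hypothetical:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: ghcr.io/example/model-server  # image name referenced in the base manifests
    newTag: "1.4.2"                     # tag promoted by CI on merge to main
```

On merge, CI can update the pin with `kustomize edit set image ghcr.io/example/model-server:1.4.2` and commit; Argo CD then reconciles the cluster to that revision.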

Declarative MLOps: Streamlining Model Serving on Kubernetes

MLOps.community