Chapters:
1. Introduction
2. Why ML
3. Why ML is hard
4. MLOps
5. Circle Detector
6. Wolf vs Husky Detector
7. Flaws in Federated Learning
8. Additional Techniques
9. Building a Pipeline
10. Extracting Your Model
11. Distillation Attack
12. Model Extraction Attack
13. Hidden Data Attack
14. Secret Memorization
15. Leakage Detection
16. Summary
17. Questions
18. AutoML
19. AI Models
20. Data Drift
21. Attack Systems
22. Differential Privacy
23. Threat Modeling
24. ML Ops
25. Outro
Description:
Explore the intersection of machine learning security and MLOps in this 51-minute conference talk. Delve into the challenges of ML implementation and learn how Kubeflow and MLOps practices can enhance the security of your machine learning workloads. Discover various ML models, including Circle Detector and Wolf vs Husky Detector, and examine potential flaws in federated learning. Gain insights into building secure pipelines, extracting models, and understanding different types of attacks such as distillation, model extraction, and hidden data attacks. Investigate techniques for secret memorization, leakage detection, and implementing differential privacy. Analyze the importance of threat modeling in ML systems and explore concepts like AutoML, AI models, and data drift. Conclude with a comprehensive summary and engage in a Q&A session to deepen your understanding of securing ML workloads through Kubeflow and MLOps strategies.

Securing ML Workloads with Kubeflow and MLOps - Pwned By Statistics

Linux Foundation