1. Introduction
2. The Advanced Pipeline
3. Challenges
4. Data Quality
5. Division of Labor
6. Data Science vs Software Engineering
7. Data Science Engineering Principles
8. Ian Good
9. Requirements Engineering
10. Reproducible Builds
11. MLflow
12. Data Scientist
13. Jupyter
14. Feature Catalog
15. Model Libraries
16. Model Optimization
17. Resource Service Management
18. Wishlist
Description:
Explore the intricacies of operating deep learning pipelines using Kubeflow in this comprehensive conference talk by Jörg Schad and Gilbert Song from Mesosphere. Dive into the process of building a production-grade data science pipeline, integrating Kubeflow with open-source data, streaming, and CI/CD automation tools. Learn about essential components such as data preparation using Apache Spark or Apache Flink, data storage with HDFS and Cassandra, automation via Jenkins, and request streaming with Apache Kafka. Discover how to construct and manage a complete deep learning pipeline for multiple tenants, covering topics like data cleansing, model storage, distributed training, monitoring, and infrastructure management. Gain insights into addressing challenges in data quality, division of labor between data scientists and software engineers, and implementing data science engineering principles. Explore advanced concepts including reproducible builds, MLflow integration, feature catalogs, model libraries, and resource service management to enhance your deep learning pipeline operations.
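
The kind of pipeline described above can be expressed as a Kubeflow Pipelines definition. The sketch below is a minimal illustration, assuming the kfp v2 SDK; the component names (prepare_data, train_model, serve_model), images, paths, and bodies are placeholders rather than the speakers' actual code, and the Spark/Flink preprocessing, HDFS storage, and Kafka streaming steps from the talk are reduced to stubs so the pipeline graph stays self-contained.

```python
# Minimal Kubeflow Pipelines sketch (kfp v2). All step bodies, images,
# and paths are hypothetical placeholders, not the talk's actual code.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def prepare_data(raw_path: str) -> str:
    # Stub for a Spark/Flink data-cleansing job; just passes the path on.
    print(f"cleansing data from {raw_path}")
    return raw_path


@dsl.component(base_image="python:3.11")
def train_model(dataset_path: str, epochs: int) -> str:
    # Stub for the distributed-training step; returns a placeholder
    # model-artifact URI.
    print(f"training for {epochs} epochs on {dataset_path}")
    return "models/latest"


@dsl.component(base_image="python:3.11")
def serve_model(model_uri: str):
    # Stub for handing the trained model to a serving component.
    print(f"deploying {model_uri}")


@dsl.pipeline(name="deep-learning-pipeline")
def deep_learning_pipeline(raw_path: str = "hdfs:///data/raw", epochs: int = 10):
    # Wire the steps together: outputs of one task feed the next.
    prep = prepare_data(raw_path=raw_path)
    train = train_model(dataset_path=prep.output, epochs=epochs)
    serve_model(model_uri=train.output)


if __name__ == "__main__":
    # Compile to a YAML package that can be uploaded to a Kubeflow cluster.
    compiler.Compiler().compile(
        pipeline_func=deep_learning_pipeline,
        package_path="deep_learning_pipeline.yaml",
    )
```

Uploading the compiled YAML to a Kubeflow instance yields a run per tenant, which is where the CI/CD (Jenkins) and streaming (Kafka) pieces mentioned in the talk would hook in.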

Operating Deep Learning Pipelines Anywhere Using Kubeflow

CNCF [Cloud Native Computing Foundation]