1. Intro
2. Most AI Projects Never Make it to Production
3. Operationalizing Machine Learning is Challenging
4. Resource Intensive Processes, Data & Org Silos
5. Serverless Simplicity With Maximum Performance
6. Accelerate Development & Deployment With an Integrated Feature-Store
7. Churn Prediction Example: Raw Data Model
8. Feature Used For The Model (Example)
9. Implementing A SINGLE Feature Using SQL
10. Kappa Architecture - Intro
11. Serverless Stream Processing For Real-Time & Batch
12. Faster Development to Production Through MLOps & Serverless Automation
13. Rapid Deployment of Real-Time Serverless Pipelines
14. Glue-less Model Monitoring and Governance
15. ML Pipeline Example: Predicting Financial Fraud
16. MLOps for Good Hackathon
Description:
Explore the challenges and solutions for building real-time machine learning pipelines in this 43-minute conference talk. Learn how to handle high-velocity, high-volume data for applications like fraud prediction, predictive maintenance, and customer churn prevention. Discover techniques for online and offline feature engineering, ML calculations in development and production environments, and effective monitoring of AI applications to detect and mitigate drift. Gain insights from real customer case studies and understand how to implement serverless architectures for simplified, high-performance ML pipelines. Delve into topics such as integrated feature stores, Kappa architecture, and MLOps automation to accelerate development and deployment of real-time AI solutions.
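As a rough illustration of two topics the description mentions, offline feature engineering and drift monitoring, here is a minimal Python sketch (not taken from the talk) that computes a single hypothetical churn feature, a 7-day event count per user, and runs a naive mean-shift drift check. All column names, thresholds, and data are illustrative assumptions.

```python
# Minimal sketch (not from the talk): one offline churn feature plus a naive drift check.
# All column names, thresholds, and data are hypothetical assumptions.
import numpy as np
import pandas as pd


def events_last_7d(events: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Count each user's events in the 7 days before `as_of` (one example feature)."""
    window = events[(events["ts"] > as_of - pd.Timedelta(days=7)) & (events["ts"] <= as_of)]
    return window.groupby("user_id").size().rename("events_7d").reset_index()


def mean_shift_drift(train_values: pd.Series, live_values: pd.Series, threshold: float = 0.3) -> bool:
    """Flag drift if the live mean moves more than `threshold` (relative) from the training mean."""
    train_mean = train_values.mean()
    if train_mean == 0:
        return live_values.mean() != 0
    return abs(live_values.mean() - train_mean) / abs(train_mean) > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    now = pd.Timestamp("2024-01-15")
    # Synthetic event log: user ids and timestamps spread over the last 30 days.
    events = pd.DataFrame({
        "user_id": rng.integers(0, 100, size=5_000),
        "ts": now - pd.to_timedelta(rng.integers(0, 30 * 24, size=5_000), unit="h"),
    })

    features = events_last_7d(events, as_of=now)
    print(features.head())

    # Pretend the live feature distribution has shifted downward by half.
    live = features["events_7d"] * 0.5
    print("drift detected:", mean_shift_drift(features["events_7d"], live))
```

In practice a feature store would version and serve such a feature for both offline training and online inference, and monitoring would use richer statistics than a single mean comparison; this sketch only shows the shape of the idea.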

Building Real-Time ML Pipelines the Easy Way

Open Data Science