1. Intro
2. Presentation Outline
3. Sales Engagement Platform (SEP)
4. ML/NLP/AI Roles in Enterprise Sales Scenarios
5. Implementation Challenges: the Digital Divide
6. Dev-Prod Divide
7. Dev-Prod Differences
8. Arbitrary Uniqueness
9. A Use Case: Guided Engagement
10. Six Stages of the ML Full Life Cycle
11. Model Development and Offline Experimentation
12. Creating a Transformer Flavor Model
13. Saving and Loading Transformer Artifacts
14. Productionizing Code and Git Repos
15. Flexible Execution Mode
16. Models: Trained, Wrapped, Private-Wheeled
17. Model Registry to Track Deployed Model Provenance
18. Conclusions and Future Work
Description:
Explore the implementation of deep transformer NLP models for enterprise AI scenarios using MLflow and AWS SageMaker in this 23-minute presentation by Databricks. Learn about a publishing/consuming framework for managing data, models, and artifacts across machine learning stages, a new MLflow model flavor supporting deep transformer models, and a design pattern that decouples model logic from deployment configurations. Discover how to create a CI/CD pipeline for continuous integration and delivery of models into a SageMaker endpoint serving production traffic. Gain insights into overcoming challenges in operationalizing these models with production-quality end-to-end pipelines covering the full machine learning lifecycle. Understand the application of these techniques in guided sales engagement scenarios at Outreach.io, and benefit from shared experiences and lessons learned in enterprise AI implementation and digital transformation.
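The design pattern mentioned above, decoupling model logic from deployment configuration, can be sketched in a few lines. This is an illustrative assumption, not the talk's actual code: the class and field names (`DeploymentConfig`, `TransformerModelWrapper`) are hypothetical, and the inference logic is a placeholder standing in for a real transformer.

```python
# Hypothetical sketch: model logic lives in a wrapper that knows nothing
# about where it is deployed; environment-specific settings live apart
# in a configuration object. Names and structure are illustrative.
from dataclasses import dataclass


@dataclass
class DeploymentConfig:
    # Deployment-specific settings (e.g. SageMaker instance type) are
    # kept out of the model code, so the same model can run anywhere.
    instance_type: str = "ml.m5.large"
    batch_size: int = 16


class TransformerModelWrapper:
    """Wraps model logic; placeholder inference stands in for a transformer."""

    def __init__(self, vocab):
        self.vocab = vocab

    def predict(self, texts):
        # Placeholder: count known tokens per input string.
        return [sum(tok in self.vocab for tok in s.split()) for s in texts]


# The same wrapper can be exercised locally in dev or served behind a
# production endpoint; only the DeploymentConfig differs between the two.
config = DeploymentConfig()
model = TransformerModelWrapper(vocab={"sales", "engagement"})
print(model.predict(["guided sales engagement"]))
```

In MLflow terms, such a wrapper would typically be logged as a custom model flavor so the registry can track which wrapped artifact is deployed where, while the deployment configuration stays in the CI/CD pipeline definition.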

Delivery of Deep Transformer NLP Models Using MLflow and AWS SageMaker for Enterprise AI

Databricks