1. Introduction
2. Agenda
3. Simplicity over Complexity
4. Community
5. Papers with Code
6. Facebook
7. Challenges
8. Dev Acts
9. Code Walkthrough
10. PyTorch Libraries
11. Model Size and Compute Needs
12. Pruning
13. Quantization
14. Quantization API
15. Quantization Results
16. Training Models at Scale
17. Deploying on Heterogeneous Hardware
18. Ad-hoc Jobs
19. PyTorch Elastic
20. Large Models
21. Remote Procedure Call
22. API Overview
23. Deployment at Scale
24. PyTorch Service
25. MLFlow
26. PyTorch Update
27. Domain Libraries
28. Getting Educated
29. Books
30. Channels
Description:
Explore the latest advancements in PyTorch and MLFlow for scaling AI research to production in this 44-minute video presentation by Databricks. Dive into key developments, including model-parallel distributed training, model optimization, and on-device deployment, and learn about the newest libraries supporting production-scale deployment in conjunction with MLFlow. Discover how PyTorch's evolution since version 1.0 has accelerated the workflow from research to production. The talk covers simplicity over complexity, community involvement, Papers with Code, challenges in AI development, and code walkthroughs. It examines model size and compute needs, with techniques such as pruning and quantization, and presents strategies for training models at scale, deploying on heterogeneous hardware, and managing large models with remote procedure calls. It closes with deployment at scale using PyTorch Service and MLFlow, an update on PyTorch's latest features and domain-specific libraries, and resources for further education, including books and channels.
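The quantization technique mentioned above can be illustrated with a minimal sketch using PyTorch's post-training dynamic quantization API. This is a hypothetical example (the toy model and shapes are assumptions, not taken from the talk): Linear-layer weights are stored as int8 and dequantized on the fly, shrinking the model without retraining.

```python
import torch
import torch.nn as nn

# Toy model standing in for a trained network (hypothetical example).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: only the listed module types
# (here nn.Linear) have their weights converted to qint8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works exactly as with the float model.
x = torch.randn(1, 128)
out = quantized(x)
print(out.shape)  # torch.Size([1, 10])
```

Static quantization and quantization-aware training follow a similar workflow but require calibration or fine-tuning passes; the video's "Quantization API" chapter covers those variants in more depth.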

Scaling Up AI Research to Production with PyTorch and MLFlow

Databricks