Explore the latest advancements in PyTorch and MLflow for scaling AI research to production in this 44-minute video presentation by Databricks. Dive into key developments, including model-parallel distributed training, model optimization, and on-device deployment. Learn about the newest libraries that support production-scale deployment in conjunction with MLflow. Discover how PyTorch's evolution since version 1.0 has accelerated the path from research to production. Gain insights into topics such as simplicity over complexity, community involvement, Papers with Code, challenges in AI development, and code walkthroughs. Understand the importance of model size and compute needs, and explore techniques such as pruning and quantization for reducing both. Examine strategies for training models at scale, deploying on heterogeneous hardware, and managing large models. Delve into remote procedure calls, API overviews, and deployment at scale using TorchServe and MLflow. Stay up to date on PyTorch's latest features and domain-specific libraries, and find resources for further learning, including books and channels, to strengthen your AI research and production skills.
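To make two of the techniques mentioned above concrete, here is a minimal sketch (not taken from the video) that applies post-training dynamic quantization to a small PyTorch model and then logs the result to MLflow; the TinyNet module and its layer sizes are hypothetical placeholders.

```python
import mlflow.pytorch
import torch
import torch.nn as nn

# Toy model standing in for a real research network (hypothetical example).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(128, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet().eval()

# Post-training dynamic quantization: weights of the Linear layers are stored
# as int8, shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Log the quantized model to an MLflow run so it can be versioned, registered,
# and deployed later.
with mlflow.start_run():
    mlflow.pytorch.log_model(quantized, artifact_path="model")
```

From there, the logged artifact can be registered in the MLflow Model Registry and served at scale, for example through TorchServe via MLflow's deployment plugins, which is the kind of workflow the talk walks through.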
Scaling Up AI Research to Production with PyTorch and MLflow