Learn how to deploy machine learning models into production with BentoML in this comprehensive tutorial video. It covers installing BentoML, saving ML models to BentoML's local model store, creating a BentoML service, building a bento and containerizing it with Docker, and sending requests to receive inferences. Follow along as the instructor trains a simple ConvNet on MNIST, saves the Keras model, and runs the BentoML service via Docker. The video also surveys deployment options such as Kubernetes and cloud platforms, and the accompanying code is available on GitHub for hands-on practice.
How to Deploy ML Models in Production with BentoML
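The workflow the video walks through can be sketched at the command line. This is a hedged outline, not the instructor's exact commands: the service file name (`service.py`), service object (`svc`), bento tag (`mnist_service`), and `/predict` endpoint are hypothetical placeholders, and exact flags may vary across BentoML versions.

```shell
# Install BentoML (assumes a working Python/pip environment)
pip install bentoml

# Serve the service locally for testing
# ("service:svc" = file service.py containing a Service object named svc - hypothetical names)
bentoml serve service:svc

# Build a bento from the bentofile.yaml in the current directory
bentoml build

# Containerize the built bento with Docker (tag is hypothetical)
bentoml containerize mnist_service:latest

# Run the container, exposing BentoML's default port 3000
docker run -p 3000:3000 mnist_service:latest

# Send an inference request (endpoint name and payload shape are hypothetical)
curl -X POST http://localhost:3000/predict \
  -H "Content-Type: application/json" \
  -d '[[0.0, 0.1], [0.2, 0.3]]'
```

Containerizing the bento is what makes the Kubernetes and cloud deployment options mentioned above possible: the resulting image can be pushed to a registry and run on any container platform.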