1. Intro
2. BentoML deployment steps
3. Installing BentoML and other requirements
4. Training a simple ConvNet model on MNIST
5. Saving Keras model to BentoML local store
6. Creating BentoML service
7. Sending requests to BentoML service
8. Creating a bento
9. Serving a model through a bento
10. Dockerise a bento
11. Run BentoML service via Docker
12. Deployment options: Kubernetes + Cloud
13. Outro
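
Chapters 8–10 revolve around building and containerising a bento, which BentoML drives from a `bentofile.yaml` at the project root. A minimal sketch, assuming the service object is named `svc` inside a `service.py` module (both names are placeholders, not confirmed by the video):

```yaml
# bentofile.yaml — build recipe for `bentoml build`
# "service:svc" means: module service.py, service object svc (assumed names)
service: "service:svc"
include:
  - "*.py"          # package the service code into the bento
python:
  packages:          # pip dependencies baked into the bento
    - tensorflow
    - numpy
```

With this file in place, `bentoml build` creates the bento and `bentoml containerize <bento_tag>` wraps it in a Docker image, matching chapters 8 and 10.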
Description:
Learn how to deploy Machine Learning models into production using BentoML in this comprehensive tutorial video. Explore the installation process for BentoML, save ML models to BentoML's local store, create a BentoML service, build and containerize a bento with Docker, and send requests to receive inferences. Follow along as the instructor demonstrates training a simple ConvNet model on MNIST, saving a Keras model, and running a BentoML service via Docker. Gain insights into deployment options such as Kubernetes and Cloud platforms, and access accompanying code on GitHub for hands-on practice.
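
The request-sending step described above can be sketched with a plain HTTP client. The endpoint path `/classify` is an assumption (BentoML derives the route from the service's API function name), and port 3000 is BentoML's default serving port; the request will fail unless a service is running locally.

```python
import json
import urllib.request

# Build a dummy 28x28 grayscale "MNIST image" (all zeros) as the JSON body.
payload = json.dumps([[0.0] * 28 for _ in range(28)]).encode("utf-8")

# Endpoint name "classify" is hypothetical; it must match the service's API
# function name. Port 3000 is BentoML's default for `bentoml serve`.
req = urllib.request.Request(
    "http://localhost:3000/classify",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.load(resp))
except OSError as exc:
    # Reached when no BentoML service is listening on localhost:3000.
    print(f"request failed: {exc}")
```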

How to Deploy ML Models in Production with BentoML

Valerio Velardo - The Sound of AI