1. Intro
2. Why DLP on Kubernetes
3. What we did
4. Why serverless
5. What's changed in 2018/2019
6. Our current stack
7. Knative components
8. Our old architecture
9. Knative build - DLP example
10. Tektoncd-pipeline with buildkit
11. What is Knative serving
12. Knative serving - DLP example
13. Knative serving - Ingress
14. Knative serving - autoscale old solution
15. Knative serving - custom autoscale class
16. Knative serving - cold start
17. Knative with container instance - old solution
18. Compute and Network on Edge
19. Knative eventing - CloudEvents
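
The serving and autoscale chapters above revolve around Knative's Service resource. A minimal sketch of one, with the concurrency-based autoscaling annotations those chapters discuss (the service name and image are illustrative, not from the talk):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: dlp-inference                # illustrative name
spec:
  template:
    metadata:
      annotations:
        # default Knative Pod Autoscaler class; a custom class can be
        # plugged in here, as the "custom autoscale class" chapter covers
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        # target 10 in-flight requests per pod
        autoscaling.knative.dev/target: "10"
        # allow scale-to-zero when idle (the cold-start trade-off)
        autoscaling.knative.dev/minScale: "0"
    spec:
      containers:
        - image: example.com/dlp/inference:latest   # illustrative image
```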
Description:
Explore how Baidu leverages Knative to enhance their internal deep learning platform in this conference talk. Discover the implementation of workflow automation between training and inference services using Knative eventing, smart routing and auto-scaling with Knative serving, and training job image building with Knative build framework. Learn about the platform's evolution, including the expansion of eventing for pipeline automation, serving for improved inference services, and build for streamlined training image generation. Gain insights into Baidu's current stack, Knative components, and architectural changes that led to a 20% reduction in resource consumption. Delve into topics such as DLP on Kubernetes, serverless computing, Tektoncd-pipeline with buildkit, custom autoscale class, cold start optimization, edge computing, and CloudEvents integration.
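
The workflow automation described above hinges on Knative eventing passing CloudEvents between training and inference services. As a minimal sketch, the following builds a CloudEvents 1.0 structured-mode (JSON) envelope using only the standard library; the event type, source, and payload are hypothetical, not taken from the talk:

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloudevent(event_type, source, data):
    """Build a CloudEvents 1.0 structured-mode envelope (JSON format).

    specversion, id, source, and type are the four required context
    attributes per the CloudEvents 1.0 specification; the rest are optional.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,          # URI-reference identifying the producer
        "type": event_type,        # reverse-DNS style event type name
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

# Hypothetical event a training job might emit on completion, so that a
# Knative eventing trigger can fan it out to the inference rollout step.
event = make_cloudevent(
    event_type="com.example.dlp.training.complete",   # illustrative name
    source="/dlp/training-jobs/job-42",               # illustrative source
    data={"model": "resnet50", "status": "succeeded"},
)
# POSTed with Content-Type: application/cloudevents+json
body = json.dumps(event)
```

A binding such as the CloudEvents SDK can produce the same event in binary mode, carrying the context attributes as `ce-*` HTTP headers instead of in the JSON body.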

Evolving Deep Learning Platform with Knative

Linux Foundation