1. Introduction
2. Who Are We
3. Why Is This Important
4. Data Parallelization
5. Model Parallelization
6. Distributed Flow Training
7. TensorFlow Tools
8. Demos
9. Training Environment
10. Model Performance
11. Distributed Training
12. Distributed Training Results
13. Compute to Communication Ratio
14. Other Observations
15. How Can We Improve
16. Uber's
17. Cluster Performance
18. FreeFlow on CNI
19. GPU Resource Scheduler
20. Fast AI
Description:
Explore the democratization of machine learning on Kubernetes in this 38-minute Docker conference talk. Learn about data and model parallelization, distributed flow training, and TensorFlow tools. Discover training environments, model performance, and distributed training results. Examine compute-to-communication ratios and other observations. Investigate potential improvements, including Uber's cluster performance, FreeFlow on CNI, GPU resource scheduling, and Fast AI. Gain insights into the importance of making machine learning more accessible and efficient on Kubernetes platforms.

Democratizing Machine Learning on Kubernetes

Docker