1. Intro
2. Deep Learning @ UBER
3. Self-Driving Vehicles
4. Trip Forecasting
5. Fraud Detection
6. Why Distributed Deep Learning?
7. How Distributed Deep Learning Works
8. Why Mesos?
9. Mesos Support for GPUs
10. Mesos Nested Containers
11. What is Missing?
12. Peloton Overview
13. Peloton Architecture
14. Elastic GPU Resource Management
15. Resource Pools
16. Gang Scheduling
17. Placement Strategies
18. Why TensorFlow?
19. Architecture for Distributed TensorFlow on Mesos
20. Can We Do Better?
21. Architecture for Horovod on Mesos
22. Distributed Training Performance with Horovod
23. What About Usability?
24. Giving Back
25. Thank you!
Description:
Explore distributed deep learning on Apache Mesos with GPU support and gang scheduling in this 37-minute conference talk from UBER engineers. Learn how to speed up complex model training, scale to hundreds of GPUs, and shard models that don't fit on a single machine. Discover the design and implementation of running distributed TensorFlow on Mesos clusters with hundreds of GPUs, leveraging key features like GPU isolation and nested containers. Gain insights into GPU and gang scheduling, task discovery, and dynamic port allocation. See real-world examples of distributed training speed-ups using a TensorFlow model for image classification. Delve into UBER's deep learning applications in self-driving vehicles, trip forecasting, and fraud detection. Understand the architecture of Peloton, UBER's cluster management system, and its features for elastic GPU resource management, resource pools, and placement strategies. Compare distributed TensorFlow and Horovod architectures on Mesos, and examine their performance benefits for large-scale deep learning tasks.
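The synchronous data-parallel training the talk compares (distributed TensorFlow vs. Horovod) boils down to one idea: each worker computes a gradient on its own data shard, the gradients are averaged across workers, and every worker applies the same averaged update. The sketch below is a toy single-process illustration of that pattern with names invented here; real Horovod wraps a framework optimizer and averages tensors with ring-allreduce over MPI/NCCL rather than a Python loop.

```python
# Toy sketch of synchronous data-parallel training.
# Each "worker" computes a gradient on its own data shard; gradients are
# averaged (the job Horovod performs via ring-allreduce), and every worker
# applies the identical averaged update, keeping replicas in sync.

def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(grads):
    """Average gradients across workers (stand-in for ring-allreduce)."""
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.01, steps=100):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # runs in parallel in reality
        w -= lr * allreduce_mean(grads)                 # same update on every worker
    return w

# Data generated from y = 3x, split round-robin across 4 "workers".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(round(train(shards), 3))  # converges toward the true slope 3.0
```

Because every replica sees the same averaged gradient, the model weights stay bitwise-identical across workers without a central parameter server, which is the key architectural difference Horovod brings over the parameter-server layout of classic distributed TensorFlow.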

Distributed Deep Learning on Apache Mesos with GPUs and Gang Scheduling

Linux Foundation