1. Intro
2. ML Building Blocks
3. TensorFlow APIs
4. Why input pipeline?
5. tf.data: TensorFlow Input Pipeline
6. Input Pipeline Performance
7. Software Pipelining
8. Parallel Transformation
9. Parallel Extraction
10. tf.data Options
11. TFDS: TensorFlow Datasets
12. Why distributed training?
13. tf.distribute.Strategy API
14. How to use tf.distribute.Strategy?
15. Multi-GPU all-reduce sync training
16. All-Reduce Algorithm
17. Synchronous Training
18. ResNet50 v1.5 Performance with Multi-GPU
19. Multi-worker all-reduce sync training
20. All-reduce sync training for TPUs
21. Parameter Servers and Workers
22. Central Storage
23. Programming Model
24. What's supported in TF 2.0 Beta
Description:
Explore best practices for tf.data and tf.distribute in this 46-minute TensorFlow presentation by Software Engineer Jiri Simsa. Dive into building efficient TensorFlow input pipelines, improving performance with the tf.data API, and implementing distributed training strategies. Learn about software pipelining, parallel transformation, and parallel extraction techniques. Discover the benefits of TensorFlow Datasets (TFDS) and various distributed training approaches, including multi-GPU all-reduce synchronous training and multi-worker setups. Gain insights into performance optimization for ResNet models, parameter servers, and central storage concepts. Understand the programming model and features supported in TensorFlow 2.0 Beta for distributed training.
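
To give a flavor of the input-pipeline techniques the talk covers, here is a minimal sketch of a tf.data pipeline combining parallel extraction (interleave), parallel transformation (map), and software pipelining (prefetch). The file pattern and parse_fn below are hypothetical placeholders, and tf.data.AUTOTUNE is the current spelling (in the TF 2.0 Beta era discussed in the talk it lived at tf.data.experimental.AUTOTUNE):

```python
import tensorflow as tf

# Hypothetical parser: decode one serialized tf.train.Example into an
# (image, label) pair. Adjust the feature spec to your own records.
def parse_fn(record):
    spec = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(record, spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, [224, 224])
    return image, parsed["label"]

files = tf.data.Dataset.list_files("/path/to/train-*.tfrecord")  # hypothetical path

dataset = (
    files
    # Parallel extraction: read from several TFRecord files concurrently.
    .interleave(tf.data.TFRecordDataset,
                num_parallel_calls=tf.data.AUTOTUNE)
    # Parallel transformation: parse records on multiple CPU cores.
    .map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    # Software pipelining: overlap preprocessing with the training step.
    .prefetch(tf.data.AUTOTUNE)
)
```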
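
And a minimal sketch of the multi-GPU synchronous-training path: tf.distribute.MirroredStrategy replicates variables across the visible GPUs and aggregates gradients with all-reduce each step. The toy model and in-memory dataset are illustrative placeholders, not the setup used in the talk:

```python
import tensorflow as tf

# MirroredStrategy implements multi-GPU all-reduce synchronous training;
# with no GPUs available it falls back to a single replica on CPU.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer must be created inside the strategy scope so
    # their variables are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Toy in-memory dataset standing in for a real input pipeline.
x = tf.random.normal([1024, 784])
y = tf.random.uniform([1024], maxval=10, dtype=tf.int64)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)

model.fit(dataset, epochs=2)  # each step runs synchronously on all replicas
```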

Inside TensorFlow - tf.data + tf.distribute

TensorFlow