Distribution Strategy API: a high-level API to distribute your training
Training with Estimator API
Training on multiple GPUs with Distribution Strategy
Mirrored Strategy
Demo Setup on Google Cloud
Performance Benchmarks
A simple input pipeline for ResNet50
Input pipeline as an ETL Process
Input pipeline bottleneck
Parallelize file reading
Parallelize map transformations
Pipelining with prefetching
Using fused transformation ops
Work In Progress
TensorFlow Resources
Description:
Learn how to efficiently scale machine learning model training across multiple GPUs and machines using TensorFlow's distribution strategies in this 35-minute Google I/O '18 conference talk. Explore the Distribution Strategy API, which enables distributed training with minimal code changes. Discover techniques for data parallelism, synchronous and asynchronous parameter updates, and model parallelism. Follow a demonstration of setting up distributed training on Google Cloud and examine performance benchmarks for ResNet50. Gain insights into optimizing input pipelines, including parallelizing file reading and transformations, pipelining with prefetching, and using fused transformation ops. Access additional resources and performance guides to further enhance your distributed TensorFlow training skills.
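To make the "minimal code changes" point concrete, below is a minimal sketch of the pattern covered in the talk, assuming the TensorFlow 1.x-era APIs that were current at I/O '18 (tf.contrib.distribute.MirroredStrategy, tf.contrib.data, the Estimator API). The file pattern, parse_fn, and the tiny model_fn are hypothetical placeholders for illustration, not code from the talk.

```python
import tensorflow as tf


def parse_fn(record):
  # Hypothetical record parser; a real ResNet50 pipeline would decode and
  # augment JPEG images here rather than read pre-flattened float features.
  features = tf.parse_single_example(record, {
      "image": tf.FixedLenFeature([784], tf.float32),
      "label": tf.FixedLenFeature([], tf.int64),
  })
  return features["image"], features["label"]


def input_fn():
  # Parallelize file reading: interleave records from several shards at once.
  files = tf.data.Dataset.list_files("/path/to/train-*.tfrecord")
  dataset = files.apply(
      tf.contrib.data.parallel_interleave(tf.data.TFRecordDataset,
                                          cycle_length=8))
  dataset = dataset.shuffle(buffer_size=10000)
  # Fused map + batch transformation; parsing runs in parallel across batches.
  # (The unfused form would be .map(parse_fn, num_parallel_calls=8).batch(64).)
  dataset = dataset.apply(
      tf.contrib.data.map_and_batch(parse_fn, batch_size=64,
                                    num_parallel_batches=4))
  # Pipeline with prefetching so the GPUs never wait on the input pipeline.
  return dataset.prefetch(1)


def model_fn(features, labels, mode):
  # Hypothetical tiny model standing in for ResNet50.
  logits = tf.layers.dense(features, 10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
      loss, global_step=tf.train.get_or_create_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)


# Mirror the model across local GPUs with synchronous updates; the Estimator
# picks up the strategy through RunConfig. num_gpus here is illustrative.
distribution = tf.contrib.distribute.MirroredStrategy(num_gpus=2)
config = tf.estimator.RunConfig(train_distribute=distribution)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn=input_fn, steps=1000)
```

In current TensorFlow 2.x releases the same idea is spelled tf.distribute.MirroredStrategy and the parallel and fused tf.data transformations have moved out of contrib, but the structure is unchanged: wrap training in a strategy, and keep the input pipeline parallel and prefetched so it is not the bottleneck.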