1. Introduction
2. TPU v2
3. TensorFlow
4. Chaos
5. Models
6. Object Detection
7. Google Cloud Platform Notebook
8. The Interconnect
9. Pricing
10. Cost
11. eBay
12. Accuracy Boost
13. Data Power
14. Scalability
15. Complexities
16. Model Parallel
17. Masters of Law
18. Magenta Fraud
19. Training Transformer Model
20. Anjana
21. Demo
22. Feeds
23. Training Time
24. Learning Rate Schedule
25. Summary
Description:
Explore the technical details of Cloud TPU and Cloud TPU Pods in this 42-minute conference talk from Google I/O'19. Dive into the domain-specific architecture designed to accelerate TensorFlow training and prediction workloads, delivering performance benefits for production machine learning. Discover new TensorFlow features that enable large-scale model parallelism for deep learning training. Learn about TPU v2, object detection, Google Cloud Platform notebooks, the interconnect technology, pricing considerations, and scalability. Gain insights into model-parallel techniques, training Transformer models, and optimizing learning rate schedules. Presented by Kaz Sato and Martin Gorner, the talk covers accuracy boosting, data power, and the complexities of AI supercomputing, and includes a demo showcasing the capabilities of Cloud TPU Pods for large machine learning problems.
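The core workflow the talk describes is pointing ordinary TensorFlow training at a Cloud TPU or a TPU Pod slice. As a rough orientation only, here is a minimal sketch of that setup using the TensorFlow 2.x distribution API; the TPU name, model, and dataset below are placeholders, not details taken from the talk:

```python
import tensorflow as tf

# Resolve and initialize the Cloud TPU; "my-tpu" is a placeholder name,
# not a resource mentioned in the talk.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across all TPU cores (data parallelism).
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Toy classifier standing in for the detection and Transformer models
    # discussed in the talk.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_dataset, ...) would then run each training step on the TPU.
```

The learning rate schedule chapter relates to keeping training stable as batch sizes grow on larger pod slices, where a common recipe is a warmup ramp followed by decay. The following is a generic sketch of such a schedule, not the exact schedule used in the talk; all constants are illustrative:

```python
import tensorflow as tf

class WarmupThenDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warmup to a peak rate, then exponential decay (illustrative values)."""

    def __init__(self, peak_lr=1e-3, warmup_steps=1000,
                 decay_steps=10000, decay_rate=0.9):
        super().__init__()
        self.peak_lr = peak_lr
        self.warmup_steps = warmup_steps
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup_lr = self.peak_lr * step / self.warmup_steps
        decayed_lr = self.peak_lr * self.decay_rate ** (
            (step - self.warmup_steps) / self.decay_steps)
        # Below warmup_steps the linear ramp is the smaller value;
        # afterwards the exponential decay takes over.
        return tf.minimum(warmup_lr, decayed_lr)

# Usage: pass the schedule wherever an optimizer expects a learning rate.
optimizer = tf.keras.optimizers.Adam(learning_rate=WarmupThenDecay())
```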

Cloud TPU Pods - AI Supercomputing for Large Machine Learning Problems

TensorFlow