1. Introduction
2. Key Considerations
3. Performance
4. Useful Work
5. Frameworks
6. Reference Implementation
7. Orchestration
8. Accelerated Processing Kit
9. HPC Toolkit
10. Maloco Introduction
11. How Maloo Works
12. What Sets Maloo Apart
13. Maloo Scale
14. TPUs
15. Contextual AI
16. Pretraining ML
Description:
Explore key considerations for choosing tensor processing units (TPUs) and graphics processing units (GPUs) for AI training workloads in this 44-minute session from Google Cloud Next 2024. Learn about the strengths of each accelerator for various workloads, including large language models and generative AI. Discover best practices for optimizing training workflows on Google Cloud using TPUs and GPUs. Understand performance and cost implications, along with strategies for cost optimization at scale. Dive into topics such as accelerated processing kits, HPC toolkits, and the Maloco framework. Gain insights on contextual AI, pretraining ML, and scaling TPUs for high-performance AI training.

Accelerate AI Training Workloads with Google Cloud TPUs and GPUs

Google Cloud Tech