1. Intro
2. Deep Learning in production
3. Observations: Low utilization
4. Opportunities
5. Outline
6. Dynamic scaling: memory
7. Dynamic scaling: computation (exclusive mode)
8. AntMan architecture
9. Micro-benchmark: Memory grow-shrink
10. Micro-benchmark: Adaptive computation
11. Trace experiment
12. Large-scale experiment
13. Conclusion

AntMan: Dynamic Scaling on GPU Clusters for Deep Learning
Description:
Explore a conference talk on AntMan, a deep learning infrastructure designed to efficiently manage and scale GPU resources for complex deep learning workloads. Discover how this system, deployed at Alibaba, improves GPU utilization by dynamically scaling memory and computation within deep learning frameworks. Learn about the co-design of cluster schedulers with deep learning frameworks, enabling multiple jobs to share GPU resources without compromising performance. Gain insights into how AntMan addresses the challenges of fluctuating resource demands in deep learning training jobs, resulting in significant improvements in GPU memory and computation unit utilization. Understand the unique approach to efficiently utilizing GPUs at scale, which has implications for job performance, system throughput, and hardware utilization in large-scale deep learning environments.
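The mechanism behind this description is that the cluster scheduler pushes per-job limits, and the deep learning framework enforces them at mini-batch boundaries: shrinking or growing its GPU memory footprint, and pacing kernel launches for opportunistic jobs so resource-guarantee jobs keep their performance. The sketch below is a minimal, purely illustrative Python model of that co-design loop; the names (`MemoryPool`, `OpManager`, `apply_scheduler_limits`) are hypothetical stand-ins invented for this example, not AntMan's actual interfaces.

```python
import time

class MemoryPool:
    """Hypothetical stand-in for a framework's cached GPU allocator."""
    def __init__(self, cap_mb):
        self.cap_mb = cap_mb          # scheduler-imposed upper bound
        self.cached_mb = 0            # memory held by the pool but not in active use

    def shrink_to(self, new_cap_mb):
        # At a mini-batch boundary, release cached memory back to the GPU
        # so a co-located job can use it; tensors in active use are untouched.
        released = max(0, self.cached_mb - new_cap_mb)
        self.cached_mb -= released
        self.cap_mb = new_cap_mb
        return released

class OpManager:
    """Hypothetical pacing of kernel launches for an opportunistic job."""
    def __init__(self, idle_ms=0.0):
        self.idle_ms = idle_ms        # idle time inserted between ops

    def run_op(self, op):
        op()                          # launch the next training op (stubbed here)
        if self.idle_ms > 0:
            # Yield compute time to the co-located resource-guarantee job.
            time.sleep(self.idle_ms / 1000.0)

def apply_scheduler_limits(pool, ops, mem_cap_mb, idle_ms):
    """Co-design hook: the cluster scheduler pushes new limits, and the
    framework applies them at the next mini-batch boundary."""
    pool.shrink_to(mem_cap_mb)
    ops.idle_ms = idle_ms

# Example: an opportunistic job is throttled while a high-priority job ramps up.
pool, ops = MemoryPool(cap_mb=16000), OpManager()
pool.cached_mb = 12000
apply_scheduler_limits(pool, ops, mem_cap_mb=4000, idle_ms=2.0)
for _ in range(3):
    ops.run_op(lambda: None)          # stand-in for one training op
print(pool.cap_mb, pool.cached_mb, ops.idle_ms)
```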

AntMan: Dynamic Scaling on GPU Clusters for Deep Learning

USENIX