Outline:
1. Intro
2. Deep Learning
3. A Specialized Chip
4. Ambitious Target
5. Moore's Law
6. How
7. Obstacles
8. Criteria
9. Problem
10. Architecture
11. High-level overview
12. Circuit-level improvements
13. New memory technology
14. Wireless link
15. Thermals
16. Thermal multiplexing
17. External memory
18. TPU v1
19. TPU v2
20. Analog approaches
21. Computational graph
Description:
Explore a Stanford University seminar on petascale deep learning processor architecture, presented by Tapabrata Ghosh of Vathys.ai. The talk covers strategies for reducing data movement and improving efficiency in deep learning processors, including circuit-level innovations and comparisons with designs such as Google's TPU. It describes Vathys' approach to computational bottlenecks, scalability for next-generation DL models, and the speaker's background in developing performant, power-efficient deep learning processors. Topics include Moore's Law, obstacles in chip design, architectural solutions, new memory technologies, wireless links, thermal management, and comparisons with other approaches such as TPU v1, TPU v2, and analog designs.

Petascale Deep Learning on a Single Chip

Stanford University