1. Intro
2. What does it mean to make hardware for AI?
3. Why were GPUs so successful?
4. What is "dark silicon"?
5. Beyond GPUs: How can we get even faster AI compute?
6. A look at today's accelerator landscape
7. Systolic Arrays and VLIW
8. Reconfigurable dataflow hardware
9. The failure of Wave Computing
10. What is near-memory compute?
11. Optical and Neuromorphic Computing
12. Hardware as enabler and limiter
13. Everything old is new again
14. Where to go to dive deeper?
Description:
Dive into an in-depth interview with AI acceleration expert Adi Fuchs, exploring the landscape of modern AI acceleration technology. Gain insights into the success of GPUs, the concept of "dark silicon," and emerging technologies beyond traditional accelerators. Explore systolic arrays, VLIW, reconfigurable dataflow hardware, near-memory computing, optical and neuromorphic computing, and their impact on AI development. Understand how hardware acts as both an enabler and limiter in AI progress, and discover resources for further exploration of this rapidly evolving field.

All About AI Accelerators - GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & More

Yannic Kilcher