00:00 - Intro
00:45 - HyperStack GPUs (sponsored)
02:23 - What is GPT-Fast?
08:40 - PyTorch compile
28:15 - int8 quantization
32:15 - Speculative decoding
40:12 - int4 quantization
42:05 - Putting it all together, tensor parallelism
45:25 - Bonus optimizations
58:10 - Outro, questions
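The int8 quantization chapter (28:15) covers trading weight precision for memory bandwidth. As a rough illustration of the underlying idea, here is a minimal plain-Python sketch of symmetric per-row int8 quantization; the function names are my own and the tensor-level, torch.compile-integrated version from the talk is omitted:

```python
def quantize_int8(row):
    """Symmetric per-row int8 quantization: w ~ scale * q, q in [-127, 127].
    A toy illustration on a Python list; assumes the row is not all zeros."""
    scale = max(abs(w) for w in row) / 127.0
    q = [round(w / scale) for w in row]
    return q, scale

def dequantize(q, scale):
    # Recover approximate weights; the per-weight error is at most ~scale/2.
    return [scale * v for v in q]

# Example: one weight row quantized and recovered.
q, scale = quantize_int8([0.4, -1.0, 0.25])   # q == [51, -127, 32]
approx = dequantize(q, scale)
```

Each row stores one float scale plus one byte per weight, which is where the memory-bandwidth savings over fp16/fp32 come from.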
Description:
A video lecture by Horace He on accelerating LLM inference with PyTorch-native operations. The talk introduces GPT-Fast, then works through PyTorch compile, int8 and int4 quantization, speculative decoding, and tensor parallelism, showing how these techniques combine for blazingly fast inference in PyTorch, with bonus optimizations and a concluding Q&A session.
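The speculative decoding technique mentioned above (covered at 32:15) can be sketched as a toy accept/reject loop. This is a pure-Python illustration with my own stand-in "model" functions, not GPT-Fast's implementation; in the real system the verification of all k proposed tokens happens in a single batched forward pass of the target model:

```python
def speculative_decode(target_next, draft_next, prefix, k):
    """Propose k tokens with a cheap draft model, verify them against the
    expensive target model, and keep the longest agreeing run plus one
    target token (a correction, or a bonus token if everything agreed)."""
    # 1) Draft phase: the cheap model proposes k tokens autoregressively.
    ctx = list(prefix)
    proposed = []
    for _ in range(k):
        t = draft_next(ctx)
        proposed.append(t)
        ctx.append(t)

    # 2) Verify phase: the target model checks each proposal in order.
    #    (In practice all k positions are scored in one parallel pass.)
    ctx = list(prefix)
    accepted = []
    for t in proposed:
        correct = target_next(ctx)
        if correct == t:
            accepted.append(t)        # draft agreed: token accepted "for free"
            ctx.append(t)
        else:
            accepted.append(correct)  # first mismatch: emit the fix and stop
            break
    else:
        accepted.append(target_next(ctx))  # all k accepted: one bonus token
    return accepted

# Toy deterministic "models": the target always emits the context length;
# the draft agrees only while the context is short.
target = lambda ctx: len(ctx)
draft = lambda ctx: len(ctx) if len(ctx) < 3 else 99
speculative_decode(target, draft, [0], k=4)  # → [1, 2, 3]
```

The speedup comes from the draft model being much cheaper per token: whenever it agrees with the target, several tokens are produced for the cost of roughly one target-model pass.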

GPT-Fast - Blazingly Fast Inference with PyTorch

Aleksa Gordić - The AI Epiphany