1. Intro - Open Pretrained Transformer
2. Setup: creating the conda env
3. Setup: patching the code
4. Collecting train script arguments
5. Training script walk-through
6. Constructing a dummy task
7. Building the transformer model
8. CUDA kernels (C++ code)
9. Preparing a dummy dataset
10. Training loop
11. Zero grad, loss scaling
12. Forward pass through a transformer
13. IMPORTANT: loss scaling, mixed precision, error handling
14. Outro
Description:
Dive deep into the metaseq codebase behind Meta's large language model OPT-175B in this comprehensive video tutorial. Learn how to set up the code on your machine and explore key concepts of mixed precision training, including loss scaling and unscaling. Follow along as the instructor walks through the training script, constructs dummy tasks and datasets, builds the transformer model, and examines CUDA kernels in C++ code. Gain insights into the training loop, forward pass through a transformer, and crucial aspects of loss handling, scaling, and mixed precision. Perfect for those looking to understand the intricacies of large language model implementation and training.
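The loss scaling and unscaling the description mentions can be summarized as: multiply the loss by a large factor before backward so small fp16 gradients don't underflow, divide the gradients by the same factor before the optimizer step, and adjust the factor dynamically when overflows occur. Below is a minimal, illustrative pure-Python sketch of that dynamic loss-scaling logic; the class name, default values, and helper methods are assumptions for illustration, not metaseq's actual implementation.

```python
import math

class DynamicLossScaler:
    """Illustrative sketch of dynamic loss scaling (not metaseq's exact class).

    On overflow (inf/nan gradients) the step is skipped and the scale is
    halved; after `growth_interval` clean steps the scale is doubled to
    probe for a larger usable range.
    """

    def __init__(self, init_scale=2.0**15, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Scale up before backward so tiny gradients survive fp16.
        return loss * self.scale

    def unscale(self, grads):
        # Restore true gradient magnitudes before the optimizer step.
        return [g / self.scale for g in grads]

    def check_overflow(self, grads):
        return any(math.isinf(g) or math.isnan(g) for g in grads)

    def update(self, overflow):
        if overflow:
            self.scale /= 2.0          # back off; this step is skipped
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= 2.0      # try a larger scale again
                self._good_steps = 0


# Usage: one toy "training step" with made-up gradient values.
scaler = DynamicLossScaler(init_scale=4.0, growth_interval=2)
loss = 0.5
scaled = scaler.scale_loss(loss)           # 0.5 * 4.0 = 2.0
grads = [scaled * 0.1, scaled * 0.2]       # pretend backward produced these
overflow = scaler.check_overflow(grads)
if not overflow:
    grads = scaler.unscale(grads)          # back to unscaled magnitudes
scaler.update(overflow)
```

In real mixed-precision training this bookkeeping is typically handled by a ready-made scaler (e.g. PyTorch's `torch.cuda.amp.GradScaler`); the sketch above only shows the mechanism the video examines.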

Open Pretrained Transformer - ML Coding Series

Aleksa Gordić - The AI Epiphany