1. Introduction
2. Material
3. Underlying Technology
4. Primary Stability
5. Other Parameters
6. Methodology
7. Training Curves
8. Summary
9. Intensive vs Extensive Properties
10. Extensive vs Intensive Properties
11. The Plan
12. Example
13. General Tuning
14. Experimental Results
15. BIRD
16. Evaluation Results
17. Vertical Foundation
18. Primarization
19. Theory of Everything
Description:
Explore the groundbreaking technique of tuning GPT-3 hyperparameters on a single GPU through zero-shot hyperparameter transfer in this MIT seminar. Delve into the maximal update parametrization (µP) concept, which allows narrow and wide neural networks to share optimal hyperparameters. Learn how this method enabled tuning of the 6.7 billion parameter GPT-3 version using only 7% of its pretraining compute budget. Discover the theoretical foundations behind µP's unique properties and its connection to infinite-width neural networks and Tensor Programs theory. Gain insights from Greg Yang, a Microsoft Research scientist with a distinguished academic background, as he presents findings based on his research paper. Suitable for both general machine learning practitioners and those interested in theoretical aspects of neural networks.
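To give a flavor of the idea, here is a toy sketch (not the full parametrization from the talk) of the readout scaling that distinguishes µP from standard practice: where a standard parametrization scales output-layer weights by 1/sqrt(width), µP uses a 1/width multiplier, which is part of what keeps behavior, and hence optimal hyperparameters, stable as width grows. The simulated "activations" and the two-width comparison below are illustrative assumptions, not code from the paper.

```python
import math
import random

random.seed(0)

def forward(width, mup):
    # Hidden activations: simulated as O(1) Gaussian entries.
    h = [random.gauss(0, 1) for _ in range(width)]
    # Readout weights: standard parametrization scales by 1/sqrt(width);
    # µP scales by 1/width instead.
    scale = 1.0 / width if mup else 1.0 / math.sqrt(width)
    w = [random.gauss(0, 1) * scale for _ in range(width)]
    return sum(wi * hi for wi, hi in zip(w, h))

def avg_abs(width, mup, trials=200):
    # Average |logit| over random draws, to estimate its typical size.
    return sum(abs(forward(width, mup)) for _ in range(trials)) / trials

# Standard parametrization: logit magnitude stays O(1) as width grows.
# µP: logit magnitude at init shrinks like 1/sqrt(width); in µP it is
# training (feature learning) that brings the output back to O(1),
# in a way that is consistent across widths.
for width in (256, 4096):
    print(width, avg_abs(width, mup=False), avg_abs(width, mup=True))
```

Under this sketch, widening the net leaves the standard-parametrization logit scale roughly unchanged but shrinks the µP logit at initialization; the seminar explains why this seemingly odd choice, combined with per-layer learning-rate scaling, is exactly what makes hyperparameters transfer from narrow to wide models.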

Tuning GPT-3 on a Single GPU via Zero-Shot Hyperparameter Transfer

Massachusetts Institute of Technology