1. Intro
2. It’s Just Adding One Word at a Time
3. How Big Is the Model?
4. Let's Try a Different Prompt
5. Where Do the Tokens Get Stored?
6. What about the Other Word Probabilities?
7. How Can You Build Larger Sentences?
8. Why Does It Seem to Get Stuck?
Description:
Explore the inner workings of large language models, particularly ChatGPT, in this 20-minute video from Wolfram. See how ChatGPT generates text one word at a time, get a sense of the model's size and complexity, and examine different prompts and their effects. Learn about token storage, word probability distributions, sentence construction, and the model's potential limitations. The discussion covers the technical aspects of AI language processing, including model architecture, token handling, and the challenge of maintaining coherence in longer outputs.
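
As a rough illustration of the one-word-at-a-time generation described above, here is a minimal Python sketch. The probability table, function names, and temperature parameter are illustrative assumptions for this toy example, not material from the video; a real model computes these probabilities with a neural network over tens of thousands of tokens.

```python
import random

# Toy next-word probability table (made-up numbers for illustration only).
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"best": 0.4, "model": 0.35, "cat": 0.25},
    "a":       {"model": 0.5, "cat": 0.5},
    "best":    {"model": 0.7, "answer": 0.3},
    "model":   {"is": 0.8, "can": 0.2},
    "cat":     {"is": 1.0},
    "answer":  {"is": 1.0},
    "is":      {"here": 0.5, "good": 0.5},
    "can":     {"learn": 1.0},
}

def pick_next_word(context_word: str, temperature: float = 1.0) -> str:
    """Sample the next word from the distribution for the current context word."""
    candidates = NEXT_WORD_PROBS.get(context_word, {})
    if not candidates:
        return "<end>"
    words = list(candidates)
    # Temperature reshapes the distribution: values near 0 favour the single
    # likeliest word, larger values spread probability onto less likely words.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(words, weights=weights, k=1)[0]

def generate(max_words: int = 8, temperature: float = 1.0) -> str:
    """Build a sentence by repeatedly appending one sampled word at a time."""
    word, output = "<start>", []
    for _ in range(max_words):
        word = pick_next_word(word, temperature)
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(generate(temperature=0.2))  # near-greedy: tends to repeat the likeliest path
    print(generate(temperature=1.0))  # sampled: more varied, sometimes less coherent
```

Running it a few times shows the trade-off the episode touches on: always picking the most probable word gives flat, repetitive output (it "gets stuck"), while sampling from the full distribution gives more varied but occasionally less coherent text.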

What Is ChatGPT Doing? Understanding Large Language Models - Episode 2

Wolfram