1. Introduction
2. History and training
3. Machine learning
4. Prediction
5. More complex predictions
6. Making any shape with enough curves combined
7. How to get different curves from one curve
8. Weights and bias
9. Activation functions
10. Back propagation and gradient descent
11. Visualization of layers working together
12. What is GPT
13. Attention
14. Tokens
15. GPT-2
16. GPT-3
17. Zero, one and few shot
18. What was it trained on
19. ChatGPT
20. Closing thoughts
Description:
Dive into a comprehensive 1-hour-25-minute video lecture exploring the foundations of ChatGPT, machine learning, and neural networks. Begin with an introduction to the history and training of AI models, then progress through key concepts like prediction, complex curve combinations, weights and biases, activation functions, and back propagation. Examine the evolution of GPT models, including GPT-2 and GPT-3, and understand crucial elements such as attention mechanisms and tokenization. Conclude with an in-depth look at ChatGPT itself and final thoughts on its implications. Access additional resources, including a whiteboard visualization, research papers, and links to further learning materials on Azure, DevOps, and PowerShell.

Understanding ChatGPT: From Machine Learning to Language Models

John Savill's Technical Training