Inside GPT - Large Language Models Demystified
GOTO Conferences, GOTO Amsterdam 2024
Chapters:
1. Intro
2. GPT sequence prediction
3. Prompt engineering
4. Demo: ChatGPT2
5. Processing text
6. Demo: Word2Vec dimensionality reduction (see the sketch below this list)
7. Transformer architecture
8. Demo: GPT2 input embedding
9. Self attention
10. Demo: GPT2 multi-head attention
11. Attention example
12. Demo: GPT2 next token prediction
13. Parameters
14. Thanks for explanations & inspiration
15. Outro
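Chapter 6's demo projects Word2Vec-style word vectors down to two dimensions so that semantic neighborhoods become visible on a plot. The sketch below illustrates that idea only; it assumes gensim's downloadable GloVe vectors and scikit-learn's PCA, and the data and reduction method used in the actual demo may differ.

# Illustrative only (not the talk's demo code): reduce 50-dimensional
# word vectors to 2-D with PCA so related words can be plotted together.
# Assumes gensim and scikit-learn are installed.
import gensim.downloader as api
from sklearn.decomposition import PCA

vectors = api.load("glove-wiki-gigaword-50")  # pre-trained 50-D word vectors
words = ["king", "queen", "man", "woman", "apple", "banana"]
points = PCA(n_components=2).fit_transform([vectors[w] for w in words])

for word, (x, y) in zip(words, points):
    # Semantically related words tend to land near each other in 2-D.
    print(f"{word:>8}: ({x:+.2f}, {y:+.2f})")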
Description:
Dive deep into the architecture and inner workings of GPT algorithms and ChatGPT in this comprehensive conference talk from GOTO Amsterdam 2024. Explore fundamental concepts of natural language processing, including word embedding, vectorization, and tokenization. Follow along with hands-on demonstrations of training a GPT2 model to generate song lyrics, showcasing the internals of word sequence prediction. Examine larger language models like ChatGPT and GPT4, understanding their capabilities and limitations. Learn about hyperparameters such as temperature and frequency penalty, and see their effects on generated output. Gain practical insights into harnessing GPT algorithms for your own solutions through multiple demos covering ChatGPT2, Word2Vec dimensionality reduction, GPT2 input embedding, multi-head attention, and next token prediction. Discover how to leverage these powerful tools to create engaging and useful applications for your business.
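The description touches on tokenization, input embedding, next-token prediction, and the temperature hyperparameter. As a rough illustration (not the speaker's demo code), here is a minimal sketch of those steps, assuming the Hugging Face transformers library and PyTorch. The frequency penalty mentioned above is a further knob, exposed for example by the OpenAI API, that down-weights tokens the model has already emitted.

# A minimal sketch of the GPT-2 pipeline the description mentions:
# tokenization, input embedding, next-token prediction, and temperature.
# Assumes Hugging Face transformers and PyTorch are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Never gonna give"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # tokenization

# Input embedding: each token id is looked up as a 768-dimensional vector.
embeddings = model.transformer.wte(input_ids)
print(embeddings.shape)  # (1, sequence_length, 768)

# Next-token prediction: logits over GPT-2's ~50k-token vocabulary
# for the position following the prompt.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]

# Temperature rescales the logits before sampling: low values make the
# choice near-greedy, high values flatten the distribution toward random.
for temperature in (0.2, 1.0, 1.5):
    probs = torch.softmax(logits / temperature, dim=-1)
    next_id = torch.multinomial(probs, num_samples=1)
    print(temperature, repr(tokenizer.decode(next_id)))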
