1. Intro
2. Language models are statistical models of text
3. But "statistical model" gives bad intuition
4. Prompts are magic spells
5. Prompts are portals to alternate universes
6. A prompt can make a wish come true
7. A prompt can create a golem
8. Limitations of LLMs as simulators
9. Prompting techniques are mostly tricks
10. Few-shot learning isn't the right model for prompting
11. Character-level operations are hard
12. The prompting playbook: reasoning, reflection, & ensembling
Description:
Explore the world of prompt engineering in this 52-minute video lecture from The Full Stack's LLM Bootcamp. Gain high-level intuitions and a default playbook for prompting language models, examining two contrasting perspectives: "language models as statistical models of text" and "prompts as magic spells." Delve into various prompting techniques, including decomposition, reasoning, and reflection. Learn about the limitations of LLMs as simulators, the challenges of character-level operations, and why few-shot learning may not be the ideal model for prompting. Discover how prompts can act as portals to alternate universes, make wishes come true, and even create golems. By the end, master the prompting playbook that incorporates reasoning, reflection, and ensembling techniques to effectively harness the power of language models.

Learn to Spell - Prompt Engineering

The Full Stack