Lecture outline:
1. Intro
2. Tree-structured models
3. Language Modeling
4. LMs in The Dark Ages: n-gram models
5. Enlightenment era neural language models (NLMs)
6. GPT-2 language model (cherry-picked) output
7. Transformer models
8. Classic Word Vec (Mikolov et al. 2013)
9. Self-attention in (masked) sequence model
10. Good systems are great, but still basic NLU errors
11. What is Reasoning? Bottou 2011
12. Appropriate structural priors
13. Compositional reasoning tree
14. A 2020s Research Direction
15. A Neural State Machine
16. NSM accuracy on GQA
Description:
Explore the reasoning capabilities and limitations of language neural networks in this 50-minute lecture by Chris Manning of Stanford University. Delve into the evolution of language models, from n-gram models to modern neural language models like GPT-2. Examine the structure and functionality of transformer models and self-attention mechanisms in sequence modeling. Analyze the strengths of current systems while acknowledging the basic natural language understanding errors they still make. Investigate the concept of reasoning in AI, discussing appropriate structural priors and compositional reasoning trees. Discover emerging research directions, including Neural State Machines, and their potential to enhance AI reasoning abilities. Gain insights into the accuracy of Neural State Machines on visual question-answering tasks like GQA.
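
As a companion to the self-attention portion of the lecture, here is a minimal NumPy sketch of single-head masked (causal) self-attention, the core mechanism behind transformer sequence models. The function name, array shapes, and single-head simplification are illustrative assumptions, not the lecture's exact formulation.

```python
# A minimal sketch of masked (causal) self-attention over a token sequence.
# Assumptions: single head, no layer norm, no output projection.
import numpy as np

def masked_self_attention(x, W_q, W_k, W_v):
    """Single-head causal self-attention.

    x: (seq_len, d_model) token representations
    W_q, W_k, W_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ W_q, x @ W_k, x @ W_v          # project to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # scaled dot-product similarities
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    # Softmax over the attended positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                           # weighted sum of value vectors

# Example: 5 tokens with d_model=8, d_head=4
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = masked_self_attention(x, W_q, W_k, W_v)
print(out.shape)  # (5, 4)
```

The causal mask is what allows the same architecture to be trained as a left-to-right language model of the kind discussed in the lecture, such as GPT-2.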

Knowledge Is Embedded in Language Neural Networks but Can They Reason?

Simons Institute