1. Intro
2. Could a purely self-supervised Foundation Model achieve grounded language understanding?
3. Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
4. A quick summary of "Could a machine think?"
5. Foundation Models (FMs)
6. Self-supervision
7. Two paths to world-class AI chess?
8. Conceptions of semantics
9. Bender & Koller 2020: Symbol streams lack crucial information
10. Multi-modal streams
11. Metaphysics and epistemology of understanding
12. Behavioral testing: Tricky with Foundation Models
13. Internalism at work: Causal abstraction analysis
14. Findings of causal abstraction in large networks
Description:
Explore a thought-provoking lecture by Stanford University Professor Chris Potts examining whether purely self-supervised foundation models can achieve grounded language understanding. Delve into topics including classical AI approaches, brain-mimicking systems, conceptions of semantics, and the challenges of behavioral testing for foundation models. Analyze the metaphysics and epistemology of understanding, and discover findings on causal abstraction in large networks. Gain insight into cutting-edge AI research and its implications for language comprehension.

Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?

Santa Fe Institute