Bridging the data gap between LLMs and children
Description:
Explore the "data gap" between Large Language Models (LLMs) and human children in this 48-minute talk by Michael Frank of Stanford University. He examines why LLMs require 3-5 orders of magnitude more training data than children, considering explanations such as innate knowledge, active and social learning, multimodal input, and differences in evaluation. The talk presents new data on the richness of multimodal input and the consequences of evaluation disparities, and applies the cognitive-science distinction between competence and performance to LLMs, drawing on perspectives from AI, psychology, and neuroscience on higher-level intelligence.

Bridging the Data Gap Between LLMs and Children - Understanding Higher-Level Intelligence

Simons Institute