1. Intro
2. Multiscale predictive cognitive maps in hippocampal & prefrontal hierarchies
3. Cognitive maps: learned representations of relational structures for goal-directed multistep planning & inference
4. Conditions
5. GPT-4 32K vs. GPT-3.5 Turbo, temperature 0, 0.5, 1
6. GPT-4 32K is comfortable with deeper trees
7. GPT-4 fails shortest path in graphs with dense community structure & sometimes hallucinates edges (see the sketch after this list)
8. Can chain-of-thought (CoT) prompts improve LLMs' cognitive map performance?
9. In the cognitive & neurosciences, errors & response latencies are windows into minds & brains; what about AI/LLMs?
10. LLMs are not comparable to one person: specific latent states in response to a prompt may appear so, but they don't qualify for mental life
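Below is a minimal sketch of the kind of graph-navigation evaluation referenced in items 7 and 8, assuming networkx for graph construction. The park/room framing, the edge-list prompt, and the scoring helper are illustrative stand-ins, not the talk's actual protocol; the point is simply that a graph with dense community structure has a checkable ground-truth shortest path, and that a model's answer can be scored for hallucinated edges.

```python
import networkx as nx


def build_community_graph():
    # Three cliques of five nodes joined in a ring: dense within-community
    # edges with only a single bridge between communities, mirroring the
    # "dense community structure" condition where shortest-path answers fail.
    return nx.connected_caveman_graph(3, 5)


def edges_as_text(G):
    # Serialise the edge list into plain prompt text.
    return "; ".join(f"{u}-{v}" for u, v in G.edges())


def score_path(G, source, target, answer_path):
    """Check a proposed path for hallucinated edges and optimal length."""
    hallucinated = [(u, v) for u, v in zip(answer_path, answer_path[1:])
                    if not G.has_edge(u, v)]
    optimal = nx.shortest_path(G, source, target)
    return {
        "starts_and_ends_correctly": answer_path[0] == source and answer_path[-1] == target,
        "hallucinated_edges": hallucinated,
        "is_shortest": not hallucinated and len(answer_path) == len(optimal),
    }


if __name__ == "__main__":
    G = build_community_graph()
    prompt = (f"A park has rooms connected as follows: {edges_as_text(G)}. "
              f"What is the shortest path from room 0 to room 14? "
              f"Answer with a list of room numbers.")
    # A model's reply would be parsed into a node list; here the ground-truth
    # path stands in so the scorer can be exercised end to end.
    print(score_path(G, 0, 14, nx.shortest_path(G, 0, 14)))
```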
Description:
Explore a conference talk examining cognitive maps in large language models, focusing on multiscale predictive representations in hippocampal and prefrontal hierarchies. Delve into the comparison between GPT-4 32K and GPT-3.5 Turbo under various temperature settings, analyzing their performance in graph navigation and shortest path problems. Investigate the potential of chain of thought prompts to enhance LLMs' cognitive map capabilities. Consider the implications of errors and response latencies in understanding AI systems, while acknowledging the fundamental differences between LLMs and human cognition.
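The description crosses two prompt styles (plain vs. chain-of-thought) with temperature settings of 0, 0.5, and 1. The sketch below shows one way such a condition sweep could be organized; `query_llm` is a hypothetical stand-in for whatever chat-completion client is used, and the templates are illustrative rather than the talk's actual prompts.

```python
from typing import Callable, Dict, Tuple

PLAIN_TEMPLATE = (
    "You are in a park with rooms connected as follows: {edges}. "
    "Give the shortest route from room {start} to room {goal}."
)

# Chain-of-thought variant: same task, but the model is asked to reason
# through adjacencies before committing to a final route.
COT_TEMPLATE = PLAIN_TEMPLATE + (
    " Think step by step: first list the rooms adjacent to {start}, "
    "then extend partial routes one room at a time, and only then state "
    "the final route."
)


def run_conditions(query_llm: Callable[[str, float], str],
                   edges: str, start: str, goal: str) -> Dict[Tuple[str, float], str]:
    """Collect one response per (prompt style, temperature) condition."""
    results = {}
    for label, template in [("plain", PLAIN_TEMPLATE), ("cot", COT_TEMPLATE)]:
        for temperature in (0.0, 0.5, 1.0):
            prompt = template.format(edges=edges, start=start, goal=goal)
            results[(label, temperature)] = query_llm(prompt, temperature)
    return results
```

The returned responses would then be parsed and scored against the graph's ground-truth shortest paths, as in the earlier sketch.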

Cognitive Maps in Large Language Models - Multiscale Predictive Representations

Santa Fe Institute