1. Introduction
2. Sequence Translation
3. Learning to Execute
4. Language Modeling
5. Experiments
6. Computational Hierarchy
7. Data Efficiency
8. Recurrent Networks
9. Attention
10. Attention to Sequence
11. Limitations
12. Read-only Memory
13. Turing Machines
14. Pushdown Automata
15. Architectural Bias
16. Conclusion
Description:
Explore the intersection of recurrent neural networks and traditional models of computation in this talk by Edward Grefenstette of DeepMind. The talk analyzes various recurrent architectures, comparing simpler models to finite state automata and examining how memory-enhanced structures improve algorithmic efficiency. Topics include sequence translation, learning to execute, language modeling, and experiments situating architectures in a computational hierarchy, as well as the role of attention mechanisms, read-only memory, and architectural bias in neural networks. The talk closes with a discussion of the relationship between logic and learning in complex systems, and how combining the two approaches can yield powerful solutions in artificial intelligence and formal reasoning.

Recurrent Neural Networks and Models of Computation - Edward Grefenstette, DeepMind

Alan Turing Institute