Backpropagation and Deep Learning in the Brain
Simons Institute

1. Intro
2. The credit assignment problem
3. The solution in artificial networks: backprop
4. Why isn't backprop "biologically plausible"?
5. Neuroscience evidence for backprop in the brain?
6. A spectrum of credit assignment algorithms
7. How to convince a neuroscientist that the cortex is learning via [something like] backprop
8. What about reinforcement learning?
9. A single trial of reinforcement learning
10. Measuring outcomes
11. Update parameters with the policy gradient
12. Training neural networks with policy gradients
13. The backpropagation solution (AKA 'weight transport')
14. Feedback alignment
15. Energy-based models
16. Question
17. Constraints on learning rules
18. Target propagation
19. Gradient-free DTP variants
20. Performance on ImageNet
21. New models of a neuron
22. Future directions
23. Difference target propagation (DTP)
Description:
Explore a comprehensive lecture on backpropagation and deep learning in the brain, delivered by Timothy Lillicrap from DeepMind Technologies Limited. Delve into the credit assignment problem and its solution in artificial networks, examining the biological plausibility of backpropagation. Investigate neuroscientific evidence supporting backprop-like learning in the brain and analyze a spectrum of credit assignment algorithms. Learn how to convince neuroscientists that the cortex employs backprop-like learning mechanisms. Examine reinforcement learning, including single-trial scenarios, outcome measurements, and parameter updates using policy gradients. Compare various approaches such as feedback alignment, energy-based models, and target propagation. Discover new neuron models and explore future directions in the field of computational neuroscience and deep learning.
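
To make the reinforcement learning portion concrete (a single trial, a measured outcome, and a parameter update with the policy gradient), here is a minimal sketch of a REINFORCE-style update on a toy multi-armed bandit. The task, payoffs, learning rate, and baseline are illustrative assumptions, not material taken from the lecture itself.

```python
# Hedged sketch: a REINFORCE-style policy-gradient update on a toy bandit task.
# Each trial: sample an action from a softmax policy, measure the reward (outcome),
# and nudge the logits along grad log pi(a) * (reward - baseline).
# Payoffs, learning rate, and baseline are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])   # expected reward of each arm (assumed)
logits = np.zeros(3)                     # policy parameters
baseline, lr = 0.0, 0.1

for trial in range(5000):
    # Softmax policy over the three arms.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(3, p=probs)
    r = rng.normal(true_means[a], 0.1)   # single-trial outcome

    # Gradient of log pi(a) w.r.t. the logits of a softmax policy: one_hot(a) - probs.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    logits += lr * (r - baseline) * grad_log_pi
    baseline += 0.01 * (r - baseline)    # running-average reward baseline

print("learned action probabilities:", np.round(probs, 3))
```

Run as-is, the policy concentrates most of its probability on the highest-paying arm, which is the essence of updating parameters from sampled outcomes rather than from backpropagated errors.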
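
Feedback alignment, one of the approaches compared in the lecture, can likewise be illustrated with a short NumPy sketch. Exact backprop would send the output error backward through the transposed forward weights (the "weight transport" issue); feedback alignment replaces that pathway with a fixed random matrix. The network sizes, variable names, and hyperparameters below are assumptions made for illustration, not the lecturer's code.

```python
# Hedged sketch: feedback alignment on a toy regression problem.
# Backprop would use W2.T to propagate the error; here a fixed random matrix B
# plays that role, and the hidden layer still learns useful features.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_samples = 10, 30, 5, 200

# A fixed random linear "teacher" generates the regression targets.
T = rng.normal(size=(n_in, n_out))
X = rng.normal(size=(n_samples, n_in))
Y = X @ T

# Student network parameters.
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))
B = rng.normal(scale=0.1, size=(n_out, n_hid))   # fixed random feedback weights
lr = 0.01

for step in range(2000):
    h = np.tanh(X @ W1)          # forward pass, tanh hidden layer
    y_hat = h @ W2               # linear readout
    e = y_hat - Y                # output error (MSE gradient)

    # Feedback alignment: project the error with B instead of W2.T.
    delta_h = (e @ B) * (1.0 - h**2)   # tanh derivative

    W2 -= lr * h.T @ e / n_samples
    W1 -= lr * X.T @ delta_h / n_samples

print("final MSE:", np.mean(e**2))    # falls well below its initial value
```

The one-line difference (`e @ B` instead of `e @ W2.T`) is the whole point: no transport of forward weights into the feedback path is required for credit to be assigned to the hidden layer on this toy task.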
