1. Intro
2. How does connectivity shape activity?
3. Combinatorial Threshold-Linear Networks (CTLNs)
4. A diversity of dynamical behaviour
5. Dynamic attractors "live around fixed points"
6. Graph structure and CTLN fixed points
7. Nerves: divide and conquer
8. Directional graphs: a graph G is directional if there is a partition of its nodes V …
9. DAG decompositions
10. Directional graphs and feed-forward networks
11. Directional covers and their nerves
12. Basic examples
13. Theorem (DAG decomposition)
14. 5-clique chain example: Graph G
15. Theorem (cycle nerve)
16. Grid graph
17. Network engineering: Grid as a nerve
18. Dynamical prediction
19. Summary
20. Thank you for listening
21. Iterating the construction
Description:
Explore the relationship between network connectivity and neural activity in this 54-minute lecture on nerve theorems for fixed points of neural networks. Delve into the world of threshold linear networks (TLNs) and combinatorial threshold-linear networks (CTLNs), examining how graph structure influences network dynamics. Learn about a novel method of covering CTLN graphs with smaller directional graphs and discover how the nerve of the cover provides insights into fixed points. Understand the power of three "nerve theorems" in constraining network fixed points and effectively reducing the dimensionality of CTLN dynamical systems. Follow along as the speaker illustrates these concepts with examples, including DAG decompositions, cycle nerves, and grid graphs. Gain valuable insights into computational neuroscience and applied algebraic topology as you uncover the intricate connections between graph theory and neural network behavior.
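To make the CTLN setup concrete, here is a minimal simulation sketch following the standard Curto–Morrison construction: the connectivity matrix W is determined entirely by a directed graph, and the firing rates evolve under threshold-linear dynamics dx_i/dt = -x_i + [Σ_j W_ij x_j + θ]_+. The specific graph, the parameter values (ε, δ, θ), the step size, and the function names below are illustrative choices, not anything prescribed by the lecture.

```python
import numpy as np

def ctln_matrix(edges, n, eps=0.25, delta=0.5):
    """Build the CTLN connectivity matrix from a directed graph:
    W[i, j] = -1 + eps if j -> i is an edge, -1 - delta otherwise,
    and 0 on the diagonal (no self-connections)."""
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in edges:  # each pair (j, i) denotes an edge j -> i
        W[i, j] = -1.0 + eps
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = -x + max(0, W @ x + theta)."""
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

# Illustrative example: a 3-cycle 0 -> 1 -> 2 -> 0, a small graph whose
# CTLN is known to support a limit-cycle attractor rather than a stable
# fixed point of full support.
W = ctln_matrix([(0, 1), (1, 2), (2, 0)], n=3)
x = simulate(W, x0=[0.2, 0.0, 0.0])
print(x)
```

Because the rate update multiplies the current state by (1 - dt) and adds a nonnegative rectified term, trajectories started in the nonnegative orthant stay nonnegative and bounded, which is the regime in which the fixed-point and nerve results of the lecture apply.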

Nerve Theorems for Fixed Points of Neural Networks

Applied Algebraic Topology Network