1. Intro
2. Modern experiments capture a large range of timescales in neural data
3. We apply standard tensor decomposition methods to extract these components
4. PCA fails to recover network parameters from simulated data
5. Seminal theorem (Kruskal, 1977): linear independence of the factor columns is a sufficient condition for tensor decomposition identifiability (see the statement after this list)
6. Application 1: How does a model network learn a sensory discrimination task?
7. Gain modulation is a compact and accurate model of the network activity over all trials
8. How does prefrontal cortex encode place, actions, and rewards during maze navigation?
9. TCA (gain modulation) is a very compact and accurate model of trial-to-trial variability
10. PCA components encode complex mixtures of task variables
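
For context on item 5, the CP model and Kruskal's uniqueness condition are commonly written as below. This is standard background stated here for reference, not a quotation from the lecture; A, B, C denote the factor matrices and k_A, k_B, k_C their Kruskal ranks.

```latex
% CP decomposition of a third-order tensor X into R rank-one components:
x_{ijk} \approx \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
\qquad A \in \mathbb{R}^{I \times R},\; B \in \mathbb{R}^{J \times R},\; C \in \mathbb{R}^{K \times R}.

% Kruskal (1977): the decomposition is essentially unique (up to permutation
% and rescaling of components) whenever the Kruskal ranks satisfy
k_A + k_B + k_C \ge 2R + 2.

% If the columns of A, B, and C are each linearly independent, then
% k_A = k_B = k_C = R and the condition holds for every R >= 2.
```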
Description:
Explore dimensionality reduction techniques for matrix- and tensor-coded data in this comprehensive lecture by Alex Williams from Stanford University. Delve into matrix and tensor factorizations, including PCA, non-negative matrix factorization (NMF), independent components analysis (ICA), and canonical polyadic (CP) tensor decomposition. Learn how these methods compress large data tables and higher-dimensional arrays into more manageable representations, crucial for extracting scientific insights. Discover recent theoretical concepts and foundations, with a focus on CP tensor decomposition for higher-order data arrays. Examine practical applications in neuroscience, including the analysis of neural data across multiple timescales, sensory discrimination task learning, and prefrontal cortex encoding during maze navigation. Understand how tensor decomposition methods outperform PCA in recovering network parameters and modeling trial-to-trial variability. Gain insights into the Kruskal theorem and its implications for tensor decomposition identifiability.
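
As a rough illustration of the workflow the description summarizes, the sketch below fits a CP (TCA-style) decomposition to a simulated neurons × time × trials array with the tensorly library. The shapes, rank, and variable names are assumptions chosen for illustration, not details taken from the lecture.

```python
# Minimal sketch of CP / tensor component analysis on trial-structured neural data.
# Assumes a recent version of the tensorly package; all shapes and the rank are
# illustrative placeholders, not values used in the lecture.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# Placeholder data tensor: 50 neurons x 100 time bins x 80 trials.
# In practice this would be a real recording, not random noise.
data = rng.standard_normal((50, 100, 80))

# Fit a rank-5 CP model: each component is an outer product of a neuron factor,
# a within-trial temporal factor, and an across-trial (per-trial) factor.
cp = parafac(tl.tensor(data), rank=5, n_iter_max=500, random_state=0)
neuron_factors, time_factors, trial_factors = cp.factors

# Relative reconstruction error shows how compactly 5 components describe the data.
reconstruction = tl.cp_to_tensor(cp)
error = np.linalg.norm(data - reconstruction) / np.linalg.norm(data)
print(f"relative reconstruction error: {error:.3f}")
```

On real recordings, the across-trial factors are where slow, trial-to-trial structure such as learning or gain modulation would appear, which is the kind of comparison against PCA that the lecture makes; this snippet only indicates the general shape of such an analysis.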

Dimensionality Reduction for Matrix- and Tensor-Coded Data

MITCBMM