1. Introduction
2. Why should we use latent variable models?
3. Fitting latent variable models
4. Data hungry
5. Simple regression
6. Bayesian inference
7. Covariance
8. Covariance kernels
9. Spectral mixture kernels
10. Marginal likelihood
11. Correlation kernel
12. Factor analysis
13. Colab notebook
14. Challenges
15. Bayesian GPFA
16. Data limitations
17. Results
Description:
Explore Gaussian process priors for neural data analysis in this comprehensive lecture. Delve into the role of latent variable models, Bayesian inference, and covariance kernels, and learn about factor analysis, spectral mixture kernels, and marginal likelihood through practical examples and Colab notebooks. Discover the challenges and data limitations of these approaches, and examine the results of Bayesian GPFA. Gain insights from additional resources, including papers on Gaussian process factor analysis with dynamical structure and extensions to non-Euclidean manifolds, and see how these techniques can be applied to real-world scenarios, such as analyzing hippocampal encoding in evidence accumulation tasks.
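To make the ideas in the description concrete, here is a minimal, illustrative sketch (not taken from the lecture or its Colab notebook) of two covariance kernels mentioned above and a toy GPFA-style generative model, where latent trajectories drawn from GP priors are mapped linearly to simulated neural activity. All function names, parameter values, and dimensions are assumptions chosen for demonstration only.

```python
# Illustrative sketch, not the lecture's implementation: squared-exponential and
# one-component spectral mixture covariance kernels, plus a toy GPFA-style model.
import numpy as np

def rbf_kernel(t1, t2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance: k(t, t') = s^2 exp(-(t - t')^2 / (2 l^2))."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def spectral_mixture_kernel(t1, t2, weight=1.0, mean_freq=0.5, bandwidth=0.2):
    """Single-component spectral mixture kernel:
    k(tau) = w * exp(-2 pi^2 tau^2 v) * cos(2 pi mu tau)."""
    tau = t1[:, None] - t2[None, :]
    return weight * np.exp(-2 * np.pi**2 * tau**2 * bandwidth) * np.cos(2 * np.pi * mean_freq * tau)

rng = np.random.default_rng(0)
T = 100                               # number of time bins (arbitrary choice)
times = np.linspace(0.0, 10.0, T)

# Draw two latent trajectories from GP priors with different kernels
# (small jitter on the diagonal keeps the covariances numerically PSD).
K_slow = rbf_kernel(times, times, lengthscale=2.0) + 1e-6 * np.eye(T)
K_osc = spectral_mixture_kernel(times, times) + 1e-6 * np.eye(T)
latents = np.stack([
    rng.multivariate_normal(np.zeros(T), K_slow),
    rng.multivariate_normal(np.zeros(T), K_osc),
])                                    # shape: (2 latents, T)

# GPFA-style observation model: activity = loading @ latents + noise.
n_neurons = 20
loading = rng.normal(size=(n_neurons, 2))
activity = loading @ latents + 0.1 * rng.normal(size=(n_neurons, T))
print(activity.shape)                 # (20, 100) simulated neural activity
```

The choice of kernel encodes the prior assumption about the latent trajectories: the squared-exponential kernel yields smooth, slowly varying latents, while the spectral mixture kernel yields quasi-periodic structure.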

Learning What We Know and Knowing What We Learn - Gaussian Process Priors for Neural Data Analysis

MITCBMM