Chapters:
1. Intro
2. How we got here
3. What would it take to build an AI bench scientist?
4. The setup
5. The challenge of nonlinearity
6. No causal representations without assumptions
7. Time contrastive learning
8. Switchover: Dhanya Sridhar
9. What other learning signals can we use?
10. Tree-based regularization
11. Sparse mechanisms
12. Multiple views and sparsity
13. Concluding questions
Description:
Explore the emerging field of Causal Representation Learning (CRL) in this comprehensive tutorial. Delve into the core technical problems and assumptions driving CRL, which aims to learn causal models and mechanisms from low-level observations like text, images, or biological measurements. Gain strong intuitions about the challenges of nonlinearity and the necessity of assumptions in causal representations. Discover various learning signals, including time contrastive learning, tree-based regularization, and sparse mechanisms. Examine the potential applications of CRL in scientific discovery and AI development. Conclude with a discussion on open questions and future directions for this exciting area of research.

A Tutorial on Causal Representation Learning

Valence Labs