The statistical problem is only a proxy
Example: detection of the action "giving a phone call"
A conjecture about adversarial features
Spurious correlations
Past observations
Nature does not shuffle the data. We do!
Multiple environments
Negative mixtures matter! Consider a search engine query classification problem
Learning stable properties
Invariance buys extrapolation powers
Trivial existence cases
Playing with the function family
Invariant representation
Finding the relevant variables
Invariance and causation
Invariance for causal inference
Invariant causal prediction
Adversarial domain adaptation
4. Robust supervised learning
The linear least squares case
Issues
Characterization of the solutions
Rank of the feature matrix S
Exact recovery of high-rank solutions: two sets of environments
Nonlinear version
Colored MNIST
Scaling up invariant regularization
Phenomenon and interpretation
Description:
Explore a lecture on learning representations using causal invariance, delivered by Leon Bottou of Facebook AI Research at the Institute for Advanced Study's Workshop on Theory of Deep Learning. Delve into the challenges of machine learning, examining statistical problems as proxies and the impact of spurious correlations. Investigate the concept of multiple environments, the importance of negative mixtures, and the power of invariance for extrapolation. Analyze the relationship between invariance and causation, with applications in causal inference and adversarial domain adaptation. Examine robust supervised learning, from the linear least squares case and the characterization of its solutions to the nonlinear version, and see how these ideas apply to concrete benchmarks such as the Colored MNIST dataset. Gain insights into scaling up invariant regularization and interpreting the underlying phenomena in this 33-minute presentation.
Learning Representations Using Causal Invariance - Leon Bottou
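The "invariant regularization" in the outline refers to the IRMv1 penalty from the closely related paper "Invariant Risk Minimization" (Arjovsky, Bottou, Gulrajani, and Lopez-Paz, 2019). As a rough illustration of how that penalty works, here is a minimal PyTorch sketch, assuming binary classification where the model outputs a single logit per example and labels are floats in {0, 1}; the names irm_penalty and irm_objective are illustrative, not taken from the talk.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1 penalty: squared gradient of the per-environment risk with
    # respect to a frozen "dummy" classifier scale w = 1.0. A nonzero
    # gradient signals that this fixed classifier is not optimal on top
    # of the shared representation in this environment.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad, = torch.autograd.grad(risk, scale, create_graph=True)
    return (grad ** 2).sum()

def irm_objective(model, environments, penalty_weight=1.0):
    # Average empirical risk over environments plus the weighted
    # invariance penalty. `environments` is a list of (x, y) batches,
    # one batch per training environment, with float labels in {0, 1}.
    risk = penalty = 0.0
    for x, y in environments:
        logits = model(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    n = len(environments)
    return risk / n + penalty_weight * penalty / n
```

Freezing the classifier at the scalar 1.0 turns the constraint "the same classifier must be optimal in every environment" into a differentiable penalty on the representation, which is what allows the regularizer to scale to deep networks and to benchmarks such as Colored MNIST.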