Robust Deep Learning Under Distribution Shift
Simons Institute

1. Intro
2. Outline
3. Standard assumptions
4. Adversarial Misspellings (Char-Level Attack)
5. Curated Training Tasks Fail to Represent Reality
6. Feedback Loops
7. Impossibility absent assumptions
8. Detecting and correcting for label shift with black box predictors
9. Motivation 1: Pneumonia prediction
10. Epidemic
11. Motivation 2: Image Classification
12. The Test-Item Effect
13. Domain Adaptation - Formal Setup
14. Label Shift (aka Target Shift)
15. Contrast with Covariate Shift
16. Black Box Shift Estimation (BBSE)
17. Confusion matrices
18. Applying the label shift assumption...
19. Consistency
20. Error bound
21. Detection
22. Estimation error (MNIST)
23. Black Box Shift Correction (CIFAR10 w/ IW-ERM)
24. A General Pipeline for Detecting Shift
25. Non-adversarial image perturbations
26. Detecting adversarial examples
27. Covariate shift + model misspecification
28. Implicit bias of SGD on linear networks w/ linearly separable data
29. Impact of IW on ERM decays over MLP training
30. Weight-Invariance after 1000 epochs
31. L2 Regularization vs. Dropout
32. Deep DA / Domain-Adversarial Nets
33. Synthetic experiments
Description:
Explore the challenges and solutions in deep learning under distribution shift in this 50-minute lecture by Zack Lipton from Carnegie Mellon University. Delve into topics such as adversarial misspellings, feedback loops, and label shift detection. Learn about black box shift estimation and correction techniques, and examine their applications in pneumonia prediction and image classification. Investigate the impact of implicit bias in SGD, weight-invariance, and regularization methods on deep learning models. Gain insights into domain adaptation strategies and synthetic experiments that address distribution shift problems in real-world scenarios.
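The core technique named in the outline, Black Box Shift Estimation, admits a compact summary: under the label shift assumption (p(x|y) stays fixed while p(y) changes), the weights w[y] = q(y)/p(y) satisfy the linear system C w = mu, where C is the black-box predictor's joint confusion matrix on held-out source data and mu is its prediction distribution on unlabeled target data. The sketch below illustrates the idea in NumPy; the function name and array conventions are ours, not from the talk.

import numpy as np

def bbse_weights(preds_source, labels_source, preds_target, n_classes):
    # Joint confusion matrix on held-out labeled source data:
    # C[i, j] ~= P_source(predict = i, true label = j)
    C = np.zeros((n_classes, n_classes))
    np.add.at(C, (preds_source, labels_source), 1.0)
    C /= len(preds_source)

    # Distribution of hard predictions on unlabeled target data:
    # mu[i] ~= P_target(predict = i)
    mu = np.bincount(preds_target, minlength=n_classes) / len(preds_target)

    # Label shift implies C @ w = mu; solve for w, then clip negative
    # finite-sample estimates to zero, since true ratios q(y)/p(y) >= 0.
    w = np.linalg.solve(C, mu)
    return np.clip(w, 0.0, None)

The estimated weights would then plug into importance-weighted training (each source example (x, y) reweighted by w[y]), the "IW-ERM" correction listed in the outline's CIFAR-10 item; a large deviation of w from all-ones can likewise serve as a statistic for detecting shift.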
