Why Do Our Models Learn?
MITCBMM
1. Intro
2. Machine Learning Can Be Unreliable
3. Indeed: Machine Learning is Brittle
4. Backdoor Attacks
5. Key problem: Our models are merely (excellent!) correlation extractors
6. Indeed: Correlations can be weird
7. Simple Setting: Background bias
8. Do Backgrounds Contain Signal?
9. ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, Madry 2020)
10. Adversarial Backgrounds
11. Background-Robust Models?
12. How Are Datasets Created?
13. Dataset Creation in Practice
14. Consequence: Benchmark-Task Misalignment
15. Prerequisite: Detailed Annotations
16. Ineffective Data Filtering
17. Multiple Objects
18. Human-Label Disagreement
19. Human-Based Evaluation
20. Human vs. ML Model Priors
21. Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry 2019): (standard) models tend to lean on "non-robust" features, and adversarial perturbations manipulate these features
22. Consequence: Interpretability
23. Consequence: Training Modifications
24. Robustness + Perception Alignment
25. Robustness + Better Representations
26. Counterfactual Analysis with Robust Models
27. ML Research Pipeline
Description:
Explore the intricacies of machine learning model behavior in this thought-provoking lecture by MIT Professor Aleksander Madry. Delve into the alignment between benchmark-driven ML paradigms and real-world applications, examining biases in datasets like ImageNet and how state-of-the-art models exploit them. Discover how these biases stem from data collection and curation processes, and learn to quantify them using standard tools. Gain insights into the challenges of deploying reliable and responsible AI in real-world scenarios, covering topics such as background bias, adversarial examples, and the consequences of benchmark-task misalignment. Understand the implications for model interpretability, training modifications, and the overall machine learning research pipeline.
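The background-bias studies referenced in the chapter list (e.g., the ImageNet-9 chapters) come down to a simple measurement: compare a classifier's accuracy on test images with their original backgrounds against the same images with the foregrounds pasted onto random backgrounds. The sketch below shows one way to run that comparison with standard PyTorch tooling; the directory names, the nine-class label setup, and the randomly initialized stand-in model (in practice one would load a checkpoint trained on the ImageNet-9 superclasses) are assumptions of this sketch rather than details from the lecture.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Stand-in 9-way classifier; in practice, load weights trained on the
# ImageNet-9 superclasses (the random initialization here only keeps
# the sketch self-contained and runnable).
model = models.resnet50(num_classes=9)
model.eval()

def accuracy(folder: str) -> float:
    """Top-1 accuracy over an ImageFolder-style directory."""
    loader = DataLoader(datasets.ImageFolder(folder, preprocess),
                        batch_size=64, shuffle=False)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Hypothetical paths: the same test images, once with original backgrounds
# and once with foregrounds pasted onto randomly chosen backgrounds.
acc_original = accuracy("imagenet9/original")
acc_mixed = accuracy("imagenet9/mixed_rand")
print(f"original backgrounds: {acc_original:.3f}")
print(f"random backgrounds:   {acc_mixed:.3f}")
print(f"accuracy drop (background reliance): {acc_original - acc_mixed:.3f}")

A large accuracy drop when the backgrounds are randomized indicates that the model is leaning on background correlations rather than the foreground object, which is the kind of dataset-induced bias the lecture sets out to quantify.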
