Outline:
1. Intro
2. Machine Learning: A Success Story
3. Why Do We Love Deep Learning?
4. Key Phenomenon: Adversarial Perturbations
5. ML via Adversarial Robustness Lens
6. But: "How"/"what" does not tell us "why"
7. Why Are Adv. Perturbations Bad?
8. Human Perspective
9. ML Perspective
10. A Simple Experiment
11. The Robust Features Model
12. The Simple Experiment: A Second Look
13. Human vs ML Model Priors
14. New capability: Robustification
15. Some Direct Consequences
16. Robustness and Data Efficiency
17. Robustness + Perception Alignment
18. Robustness → Better Representations
19. Robustness + Image Synthesis
20. Problem: Correlations can be weird
21. Useful tool(?): Counterfactual Analysis with Robust Models
22. Adversarial examples arise from non-robust features in the data
Description:
Explore the intricacies of machine learning features in this thought-provoking lecture by Aleksander Madry from MIT. Delve into the success of deep learning and examine the key phenomenon of adversarial perturbations. Analyze machine learning through the lens of adversarial robustness, comparing human and ML model perspectives. Investigate the robust features model and its implications for data efficiency, perception alignment, and image synthesis. Discover how robustness can lead to better representations and explore the potential of counterfactual analysis with robust models. Gain insights into why adversarial examples arise from non-robust features in data and consider the broader implications for the field of machine learning.

Are All Features Created Equal? - Aleksander Madry

Institute for Advanced Study