Useful tool(?): Counterfactual Analysis with Robust Models
Adversarial examples arise from non-robust features in the data
Description:
Explore the intricacies of machine learning features in this thought-provoking lecture by Aleksander Madry from MIT. Delve into the success of deep learning and examine the key phenomenon of adversarial perturbations. Analyze machine learning through the lens of adversarial robustness, comparing human and ML model perspectives. Investigate the robust features model and its implications for data efficiency, perception alignment, and image synthesis. Discover how robustness can lead to better representations, and explore the potential of counterfactual analysis with robust models. Gain insight into why adversarial examples arise from non-robust features in the data, and consider the broader implications for the field of machine learning.
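The adversarial perturbations discussed in the lecture can be illustrated with a minimal sketch (not from the talk itself): an FGSM-style attack on a toy linear classifier, showing how a tiny per-coordinate change accumulates into a large shift in the model's score.

```python
import numpy as np

# Toy illustration (hypothetical example, not Madry's code): an
# FGSM-style perturbation on a linear score f(x) = w . x.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # classifier weights ("non-robust features")
x = rng.normal(size=100)   # input example

eps = 0.1                  # per-coordinate budget (L_inf constraint)
# The gradient of the score w.r.t. x is just w; step against its sign
# to push the score toward the other class.
x_adv = x - eps * np.sign(w)

# Each coordinate moves by only eps, yet the score drops by
# eps * ||w||_1, which grows with dimension: imperceptible pixel
# changes, large logit change.
score_drop = w @ x - w @ x_adv
print(score_drop)          # equals eps * np.abs(w).sum()
```

The point mirrors the lecture's framing: the model's reliance on many small, individually predictive but brittle directions is exactly what a bounded adversary exploits.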
Are All Features Created Equal? - Aleksander Madry