1. Introduction
2. Outline
3. Obstacles
4. Misdirection of Saliency
5. What is Saliency
6. Saliency axioms
7. Input invariants
8. Model parameter randomization
9. Does saliency help humans
10. Takeaways
11. Case Study 2
12. Individual neurons
13. Activation maximization
14. Populations
15. Selective units
16. Ablating selective units
17. Post-hoc studies
18. Regularizing selectivity
19. In generative models
20. Summary
21. Building better hypotheses
22. Building a stronger hypothesis
23. Key takeaways
Description:
Explore a comprehensive tutorial lecture on falsifiable interpretability research in machine learning for computer vision. Delve into key concepts including saliency, input invariants, model parameter randomization, and whether saliency methods actually help human understanding. Examine case studies on individual neurons, activation maximization, and selective units. Learn about building stronger hypotheses and gain valuable insights into the challenges and potential solutions in interpretable machine learning. Discover techniques for regularizing selectivity, their role in generative models, and the importance of developing robust, testable hypotheses in this field.

Towards Falsifiable Interpretability Research in Machine Learning - Lecture

Bolei Zhou