Towards Falsifiable Interpretability Research in Machine Learning - Lecture

Explore a tutorial lecture on falsifiable interpretability research in machine learning for computer vision. It covers key concepts including saliency, input invariants, model parameter randomization, and the impact of silencing on human understanding, and examines case studies on individual neurons, activation maximization, and selective units. The lecture also discusses techniques for regularizing selectivity in generative models, surveys challenges and potential solutions in interpretable machine learning, and emphasizes the importance of building strong, testable hypotheses in this field.