Explore a novel approach to deep neural network interpretability in this 51-minute Stanford University lecture. Delve into the concept of regularizing deep models so that humans can better understand them, with a focus on medical prediction tasks in critical care and HIV treatment. Learn about the challenges of defining interpretability, various approaches to interrogating models, and the idea of human simulatability. Examine tree regularization techniques, including regional tree regularization, and their application to real-world datasets such as MIMIC-III. Discover how interpretability metrics are evaluated and understand the caveats of regularizing for interpretability. Gain insight into the speaker's research on deep generative models and unsupervised learning algorithms, with applications in education and healthcare.
Optimizing for Interpretability in Deep Neural Networks - Mike Wu