1. Intro
2. The challenge of interpretability
3. Lots of different definitions and ideas
4. Asking the model questions
5. A conversation with the model
6. A case for human simulation
7. Simulatable?
8. Post-hoc analysis
9. Interpretability as a regularizer
10. Average path length
11. Problem setup
12. Tree regularization (overview)
13. Toy example for intuition
14. Humans are context dependent
15. Regional tree regularization
16. Example: three kinds of interpretability
17. MIMIC-III dataset
18. Evaluation metrics
19. Results on MIMIC-III
20. A second application: treatment for HIV
21. Distilled decision tree
22. Caveats and gotchas
23. Regularizing for interpretability
Description:
Explore a novel approach to deep neural network interpretability in this 51-minute Stanford University lecture. Delve into the concept of regularizing deep models for better human understanding, focusing on medical prediction tasks in critical care and HIV treatment. Learn about the challenges of interpretability, various approaches to questioning models, and the idea of human simulation. Examine tree regularization techniques, including regional tree regularization, and their application to real-world datasets like MIMIC III. Discover how to evaluate interpretability metrics and understand the caveats of regularizing for interpretability. Gain insights into the speaker's research on deep generative models and unsupervised learning algorithms, with applications in education and healthcare.
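To make the idea concrete, here is a minimal sketch of a tree-regularized training objective of the kind the lecture describes: the usual prediction loss plus a penalty on the average path length (APL) of a decision tree distilled from the deep model, with a small surrogate network standing in for the non-differentiable APL. This assumes a PyTorch-style setup; names such as `surrogate_apl` and `lambda_tree` are illustrative placeholders, not the speaker's actual code.

```python
import torch
import torch.nn.functional as F

def tree_regularized_loss(logits, targets, params_vector, surrogate_apl, lambda_tree=1e-3):
    """Prediction loss plus a differentiable estimate of decision-tree complexity.

    params_vector: the deep model's parameters flattened into a single vector.
    surrogate_apl: a small network trained (offline, on saved checkpoints) to map
        params_vector to the average path length of a decision tree that mimics
        the deep model's predictions.
    lambda_tree: trade-off between accuracy and simulatability of the distilled tree.
    """
    pred_loss = F.binary_cross_entropy_with_logits(logits, targets)
    apl_estimate = surrogate_apl(params_vector)  # differentiable stand-in for true APL
    return pred_loss + lambda_tree * apl_estimate
```

Increasing `lambda_tree` pushes the network toward decision boundaries that a short decision tree can reproduce, which is what makes the resulting model easier for a human to simulate.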

Optimizing for Interpretability in Deep Neural Networks - Mike Wu

Stanford University