1. Intro
2. Outline
3. Motivation
4. Introduction
5. Key Contributions
6. Study Details
7. Unifying Visual Explanation Methods Across Input Domains
8. Saliency Map
9. Scoped Rules (Anchors)
10. SHAP (SHapley Additive exPlanations)
11. A Unified Representation of Visual Explanation Frameworks
12. Superimposition-Based Explanation Methods
13. Training Data-Based Explanation Methods
14. Study Methodology
15. Validating Responses
16. Tasks & Datasets
17. Models and Explanations
18. Configuring and Optimizing Explanation Methods
19. Results
20. Usability and Stability of Explanations
21. Idealized vs. Actualized Explanations - Superimposition Methods
22. Explanation-by-Example
23. Privacy Risks
24. Conclusion
25. Against
Description:
Explore deep neural network explanation methods in this 37-minute lecture from the University of Central Florida's CAP6412 course. Delve into key contributions and study details, examining various visual explanation techniques such as saliency maps, scoped rules (Anchors), and SHAP. Investigate a unified representation framework for visual explanations, covering superimposition-based and training data-based methods. Learn about the study methodology, including task design, dataset selection, and model explanations. Analyze results on usability, stability, and privacy risks associated with different explanation approaches. Gain insights into the challenges of explaining complex neural networks and the implications for AI transparency and interpretability.
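
As a concrete illustration of the gradient-based saliency maps covered in the lecture, the following is a minimal sketch in PyTorch. The model choice (ResNet-18), the preprocessing pipeline, and the input file name are illustrative assumptions, not details taken from the study.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained classifier (illustrative choice; any differentiable model works).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing (assumed; not specified by the study).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

# Forward pass, then backpropagate the top class score to the input pixels.
logits = model(x)
score = logits[0, logits.argmax()]
score.backward()

# Saliency = maximum absolute input gradient across the color channels;
# large values mark pixels whose perturbation most changes the prediction.
saliency = x.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)

In the lecture's taxonomy, a saliency map of this kind is a superimposition-based explanation: the per-pixel scores are overlaid on the input image, in contrast to training-data-based methods such as explanation-by-example.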

How Can I Explain This to You? An Empirical Study of Deep Neural Net Explanation Methods - Spring 2021

University of Central Florida