Explaining Model Decisions and Fixing Them Through Human Feedback

Stanford University

Outline:
1. Intro
2. Interpretability in different stages of AI evolution
3. Approaches for visual explanations
4. Visualize any decision
5. Visualizing Image Captioning models
6. Visualizing Visual Question Answering models
7. Analyzing failure modes
8. Grad-CAM for predicting patient outcomes
9. Extensions to multi-modal Transformer-based architectures
10. Desirable properties of visual explanations
11. Equalizer
12. Biases in vision and language models
13. Human Importance-aware Network Tuning (HINT)
14. Contrastive Self-Supervised Learning (SSL)
15. Why do SSL methods fail to generalize to arbitrary images?
16. Does improved SSL grounding transfer to downstream tasks?
17. CAST makes models resilient to background changes
18. VQA for visually impaired users
19. Sub-Question Importance-aware Network Tuning
20. Explaining model decisions and fixing them via human feedback
21. Grad-CAM for multi-modal transformers
Description:
Explore the intricacies of explaining and improving AI model decisions through human feedback in this 58-minute conference talk by Ramprasaath Selvaraju, a Sr. Machine Learning Scientist at Artera. Delve into algorithms that provide explanations for deep network decisions, focusing on building user trust, incorporating domain knowledge, learning grounded representations, and correcting unwanted biases in AI models. Gain insights into visual explanations, interpretability in AI evolution, and applications in image captioning, visual question answering, and medical AI. Examine topics such as Grad-CAM, multi-modal transformer architectures, contrastive self-supervised learning, and techniques for making models resilient to background changes. Learn about innovative approaches like Human Importance-aware Network Tuning (HINT) and Sub-Question Importance-aware Network Tuning for improving AI performance and addressing biases in vision and language models.
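
Grad-CAM is the technique at the center of the talk. As a rough illustration of the idea (not the speaker's own code), the sketch below assumes PyTorch and torchvision with a pretrained ResNet-50, hooks its last convolutional block, and combines the block's activations with the gradients of a class score to produce a class-discriminative heatmap; the `grad_cam` helper name is ours.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Capture the forward activations and backward gradients of the last
# convolutional block; Grad-CAM combines the two into a coarse heatmap.
store = {}
layer = model.layer4[-1]
layer.register_forward_hook(lambda m, i, o: store.update(acts=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grads=go[0].detach()))

def grad_cam(image, class_idx=None):
    """Grad-CAM heatmap for `image` (a 1x3xHxW tensor), explaining
    `class_idx` or, by default, the model's top prediction."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Channel weights: global-average-pooled gradients (importance of each map).
    weights = store["grads"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of activation maps; ReLU keeps positively contributing regions.
    cam = F.relu((weights * store["acts"]).sum(dim=1, keepdim=True))
    # Upsample to input resolution and normalize to [0, 1] for overlaying.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage: heatmap = grad_cam(preprocessed)  # preprocessed: (1, 3, 224, 224) tensor
```

Because the heatmap is built only from activations and gradients, the same recipe applies to any differentiable architecture, which is what the talk's extensions to captioning, VQA, and multi-modal transformers build on.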