1. Introduction
2. Bad decisions
3. Definitions
4. Why
5. Article
6. Cross-validation
7. Accuracy vs. interpretability
8. Two types of machine learning problems
9. Critically ill patients
10. Two helps to be
11. Optimization problem
12. Saliency maps
13. Case-based reasoning
14. My network
15. Prototype layer
16. Red-bellied woodpecker
17. Wilson's warbler
18. Accuracy vs. interpretability
19. Computer-aided mammography
20. Interpretable AI
21. Case Study
22. Results
23. Two-Layer Additive Risk Model
24. Submission to Special Issue
25. Paper
26. Problems
27. Most powerful argument
28. Papers
29. Interactive tool
30. Demo
31. Optimization
32. Machine Learning in Medicine
Description:
Explore the critical importance of using interpretable machine learning models for high-stakes decisions in this thought-provoking lecture by Cynthia Rudin from Duke University. Delve into the potential societal consequences of relying on black box models and their unreliable explanations. Discover the advantages of interpretable models, which provide faithful explanations of their computations. Examine real-world examples in seizure prediction for ICU patients and digital mammography. Learn about different types of machine learning problems, optimization techniques, and the balance between accuracy and interpretability. Gain insights into case-based reasoning, prototype layers, and the application of interpretable AI in medicine. Engage with interactive tools and demos that showcase the power of interpretable machine learning in addressing critical healthcare challenges.

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - Cynthia Rudin

Institute for Advanced Study