Chapters:
1. Introduction
2. Outline
3. Stripe
4. Rules
5. Models
6. Decision Trees
7. Random Forest
8. Explanations
9. Intuition
10. Structure
11. Algorithm
12. Explanation
13. Elephant Trunk
14. Observation
15. LIME
16. AI Rationalisation
17. Frogger
18. Methods of model interpretability
19. Human interpretability
20. Peter Norvig
21. Roger Sperry
22. Homo Deus
23. Algorithms
24. Explanations are harmful
25. Why explanations are important
26. Human compatible AI
27. Data protection regulation
28. Clarify our ethics
29. Conclusion
Description:
Explore state-of-the-art strategies for explaining black-box machine learning model decisions in this 42-minute Strange Loop Conference talk by Sam Ritchie. Delve into the challenges of interpreting complex algorithms and the importance of demanding plausible explanations for AI-driven decisions. Learn about various techniques for generating explanations, including decision trees, random forests, and LIME. Examine the parallels between AI rationalization and human decision-making processes, and discuss the ethical implications of relying on unexplainable AI systems. Understand the significance of model interpretability in maintaining human control over technological advancements, ensuring compliance with data protection regulations, and clarifying our ethical standards in an increasingly AI-driven world.

Just-So Stories for AI - Explaining Black-Box Predictions

Strange Loop Conference