1. Introduction
2. Talk Structure
3. Why is this required
4. Trust Issues
5. Expression
6. Visualisation
7. Expansion Units
8. Knowledge Base Linking
9. Counterfactual estimations
10. Characteristics of an explanation model
11. Output of an explanation model
12. Algorithm Line 1
13. Multiclass transmission
14. Globalist connections
15. Complex reasoning tasks
16. Vanguard problems
17. Attention weights
18. Informational tasks
19. Complex models
20. Information interval
21. Sakaya 2019
22. Sampling
23. Output
24. Weights of Atoms
25. Noise Contrastive Estimation
26. Equation
27. Societal Implications
28. Conclusion
29. Questions
30. Local Fidelity
Description:
Explore a comprehensive lecture on explainable AI and its societal impact delivered by Dr. Debasis Ganguly from the University of Glasgow. Delve into the paradigm shift from feature-driven to data-driven AI learning, examining how modern AI systems process information differently from human perception. Master key explanation methodologies including LIME, L2X, and Shapley algorithms while understanding their practical applications in search systems. Learn about the critical role of explainable AI in developing fair and trustworthy next-generation systems through topics like multiclass transmission, complex reasoning tasks, and attention weights. Discover how knowledge base linking, counterfactual estimations, and noise contrastive estimation contribute to building more transparent AI systems. Gain insights into the characteristics and outputs of explanation models, information intervals, and local fidelity concepts that shape the future of AI development. Perfect for computer engineering and data science professionals seeking to understand the intersection of AI transparency and societal responsibility.
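
As a concrete companion to the explanation methodologies named above, here is a minimal, illustrative sketch of the core idea behind LIME: fit a proximity-weighted linear surrogate around one prediction of a black-box model and read its coefficients as local feature importances. This sketch is not drawn from the lecture itself; the data, model, and function names are stand-ins, and it assumes NumPy and scikit-learn are available.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black-box model to be explained (a synthetic stand-in for any classifier).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(x, model, n_samples=1000, kernel_width=1.0):
    """Fit a weighted linear surrogate around x; its coefficients serve as
    local feature importances (the core idea behind LIME)."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to probe its local neighbourhood.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Query the black box on the perturbed points.
    preds = model.predict_proba(Z)[:, 1]
    # Weight samples by proximity to x (RBF kernel), enforcing local fidelity.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # The surrogate: a simple linear model fit to the black box's outputs.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance scores

print(explain_instance(X[0], black_box))

The kernel width controls how "local" the explanation is: a smaller width concentrates the surrogate fit on points near the instance, trading global coverage for local fidelity, which is the trade-off the lecture's closing section on local fidelity addresses.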

A Journey Towards Explainable AI and Its Societal Implications - From Trust Issues to Complex Models

IIIA Hub