Motivation: Machine Learning in High-Stakes Applications
How to identify/explain sources of disparity in machine learning models?
Outline
Popular Definition: Statistical Parity
Conditional dependence can sometimes falsely detect bias (misleading dependencies) even when a model is "causally" fair. Example: a causally fair model
One causal measure that satisfies all desirable properties. Theorem: our proposed measure of non-exempt disparity
Some intuition on our proposed measure from causality
Non-negative decomposition of total "causal" disparity. Theorem 2 (pictorially illustrated)
Simulation: Four types of disparities present
Numerical Computation of Fundamental Limits on the Tradeoff
Reliable Machine Learning
Partial Information Decomposition + Causality
Description:
Explore the intersection of algorithmic fairness, causality, and information theory in this 37-minute lecture by Sanghamitra Dutta from JP Morgan AI Research. Delve into the complexities of identifying and explaining sources of disparity in machine learning models, particularly in high-stakes applications. Learn about a systematic measure of "non-exempt disparity" that combines concepts from information theory and causality. Discover how to quantify accuracy-fairness trade-offs using Chernoff Information. Gain insights into the challenges of resolving legal disputes and informing policies related to algorithmic bias, including the importance of distinguishing between disparities arising from occupational necessities versus other factors. Examine case studies, theorems, and simulations that illustrate these concepts, and understand the application of Partial Information Decomposition and causality in addressing fairness issues in AI.
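The statistical parity definition mentioned in the outline compares positive-prediction rates across groups. A minimal sketch of that comparison, not taken from the lecture itself (the function name and test data here are purely illustrative):

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Absolute difference in P(Y_hat = 1) between two groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions and a binary protected attribute
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_gap(y_pred, group))  # 0.5 (3/4 vs 1/4)
```

A gap of zero means the model satisfies statistical parity; as the lecture argues, such purely observational measures can miss or falsely flag disparity, which motivates the causal measures discussed above.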
Algorithmic Fairness From The Lens Of Causality And Information Theory