Fairness in Representation Learning: A study in evaluation and mitigation of bias via subgroup disparities
Fairness in Machine Learning
Fairness in Representations: DML
Overview: Fairness in Deep Metric Learning
Intuition: Fairness in DML
Defining Fairness in DML
Experimental Design
Empirical Results: Bias Propagates
Bias Mitigation: Considerations
Bias Mitigation: An Initial Solution (PARADE)
Empirical Results in PARADE
Comparison with Oversampling
Limitations of PARADE
Fairness Improvements in Representations
Thank you for listening!
PARtial Attribute DE-correlation (PARADE)
Description:
Explore fairness in representation learning through this conference talk by Natalie Dullerud, an incoming PhD student at Stanford University. Delve into the evaluation and mitigation of bias in deep metric learning (DML), focusing on subgroup disparities. Examine the negative impact of imbalanced training data on minority-subgroup performance in downstream tasks. Learn about the proposed fairness in non-balanced DML benchmark (finDML) and its three key properties: inter-class alignment, intra-class alignment, and uniformity. Discover how bias in DML representations propagates to common downstream classification tasks, even when the downstream training data is re-balanced. Understand the PARtial Attribute DE-correlation (PARADE) method, designed to reduce performance gaps between subgroups in both the embedding space and downstream metrics. Gain insights into broader fairness metrics in representation learning and their potential applications across domains.
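The description summarizes PARADE only at a high level. Below is a minimal, hedged sketch of what adversarially de-correlating part of an embedding from a sensitive attribute can look like; it is not the authors' implementation, and the names `Adversary`, `parade_style_losses`, and `split` are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adversary(nn.Module):
    """Tries to recover the sensitive attribute from the de-correlated slice."""
    def __init__(self, dim: int, num_attrs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_attrs),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def parade_style_losses(z: torch.Tensor, attr: torch.Tensor,
                        adversary: Adversary, split: int):
    """Return (adversary_loss, decorrelation_penalty).

    Only the last `split` dimensions of the embedding are de-correlated
    ("partial" de-correlation); the remaining dimensions stay free to
    serve the main metric-learning objective.
    """
    z_free = z[:, -split:]
    # The adversary learns to predict the attribute from frozen embeddings.
    adv_loss = F.cross_entropy(adversary(z_free.detach()), attr)
    # The encoder is penalized when the (held-fixed) adversary succeeds:
    # minimizing the negative cross-entropy makes the attribute unpredictable.
    penalty = -F.cross_entropy(adversary(z_free), attr)
    return adv_loss, penalty
```

In training, one would alternate: update the adversary to minimize `adversary_loss`, then add a weighted `decorrelation_penalty` to the usual metric-learning loss when updating the encoder, holding the adversary's parameters fixed. The slice size and the penalty weight trade retrieval quality against the subgroup gap.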