1. Intro
2. How do we identify bias in algorithmic decisions?
3. Case study: Pre-trial decision making
4. Problems with the benchmark test
5. The outcome test in Broward County
6. Risk distributions
7. The problem with the outcome test
8. The problem of infra-marginality
9. Identifying bias in human decisions
10. Making decisions with algorithms
11. Evidence from Broward County
12. Potential fairness concerns
13. Redlining
14. Why is calibration insufficient?
15. Sample bias
16. Label bias
17. Subgroup validity
18. Use of protected characteristics
19. Statistical parity as a measure of fairness
20. Where do these disparities come from?
21. The optimal rule is a single threshold
22. The fairness/fairness trade-off
23. Analogies to tests for discrimination
24. The problem with false positive rates
25. Making fair decisions with algorithms
26. Limitations
Description:
Explore algorithmic decision-making and fairness in this Simons Institute symposium talk. Delve into the challenges of identifying bias in algorithmic decisions, focusing on a case study of pre-trial decision-making. Examine the limitations of benchmark tests and outcome tests, and understand the concept of infra-marginality. Investigate how to identify bias in human decisions and compare it to algorithmic decision-making. Analyze evidence from Broward County and discuss potential fairness concerns, including redlining and the insufficiency of calibration. Learn about sample bias, label bias, and subgroup validity. Evaluate the use of protected characteristics and statistical parity as measures of fairness. Understand the optimal rule for decision-making and the trade-offs between different fairness criteria. Draw analogies to tests for discrimination and explore the limitations of false positive rates. Gain insights into making fair decisions with algorithms and recognize the limitations of current approaches.
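As a rough illustration of two ideas mentioned in the description above (a single risk threshold as the decision rule, and false positive rates differing across groups), here is a minimal simulation sketch. The group names, risk distributions, and threshold are invented for illustration and are not taken from the talk or from the Broward County data.

```python
# Illustrative sketch (not from the talk): apply one risk threshold to two
# groups with different risk distributions and compare the resulting
# detention rates and false positive rates. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(mean_risk, n=100_000):
    """Draw individual risk scores from a Beta distribution with the given
    mean, then draw binary outcomes (1 = reoffends) from those risks."""
    a = mean_risk * 10
    b = (1 - mean_risk) * 10
    risk = rng.beta(a, b, size=n)
    outcome = rng.binomial(1, risk)
    return risk, outcome

threshold = 0.5  # detain anyone whose estimated risk exceeds this single cutoff

for name, mean_risk in [("group_A", 0.30), ("group_B", 0.45)]:
    risk, outcome = simulate_group(mean_risk)
    detained = risk > threshold
    # False positive rate: share of people who would NOT have reoffended
    # but are detained under the single-threshold rule.
    fpr = detained[outcome == 0].mean()
    print(f"{name}: detention rate = {detained.mean():.2f}, FPR = {fpr:.2f}")
```

In this toy setup, group_B's risk distribution places more mass near and above the cutoff, so the same threshold produces both a higher detention rate and a higher false positive rate for that group, which is the kind of disparity the talk's discussion of infra-marginality and the fairness/fairness trade-off addresses.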

Algorithmic Decision Making and the Cost of Fairness

Simons Institute