Contents:
1. Intro
2. Real Consequences
3. Know What Question You're Asking Up Front
4. Use Conditional Probability over Correlation
5. Mortgage Lending Analysis
6. Build a Better Data Set!
7. Beware Shadow Columns
8. Make Sure Your Sample Set Is Representative
9. Keep in Mind You Need to Know Who Can Be Affected in Order to Un-bias
10. Pricing Algorithms
11. What If Amazon Built a Salary Tool Instead?
12. The Bratwurst Problem
13. More Complex Algorithms That Include Outside Influence
14. Verify and Check Solutions Derived from Simulation
15. Many AI/ML Tools Are Trained to Minimize Average Loss
16. Representation Disparity
17. Distributionally Robust Optimization
18. What Happens to People Who Use Dialect?
19. Predictive Policing
20. Look to Control Engineering
21. Abide by Ethics Guidelines
22. Transparency of Use / Transparency of Algorithms
Description:
Explore the hidden biases in AI and machine learning systems in this thought-provoking conference talk. Delve into how human cognitive biases can inadvertently seep into algorithmic decision-making processes, challenging the notion of AI's objectivity. Learn about real-world consequences of biased AI, from mortgage lending to predictive policing. Discover practical strategies to mitigate these biases, including the importance of asking the right questions, using conditional probability, building representative datasets, and implementing distributionally robust optimization. Examine case studies like Amazon's hypothetical salary tool and the "bratwurst problem" to understand the complexities of AI bias. Gain insights into ethical considerations, transparency in AI use and algorithms, and the role of control engineering in creating fairer AI systems. Walk away with a deeper understanding of how to develop more equitable and reliable AI and machine learning models that can truly benefit society.
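As a rough illustration (not material from the talk itself) of why minimizing average loss can produce representation disparity, and how the worst-group objective used in group distributionally robust optimization differs, here is a minimal Python sketch. The two groups, their sizes, and the one-parameter linear model are hypothetical and chosen only to make the contrast visible.

```python
# Minimal sketch (hypothetical data, not from the talk) of "average loss vs.
# worst-group loss": a majority and a minority group with different true
# relationships between feature x and label y. Only numpy is assumed.
import numpy as np

rng = np.random.default_rng(0)

# Majority group is 90% of the samples, minority group is 10%.
n_major, n_minor = 900, 100
x_major = rng.normal(size=n_major)
y_major = 2.0 * x_major + rng.normal(scale=0.1, size=n_major)   # slope +2
x_minor = rng.normal(size=n_minor)
y_minor = -1.0 * x_minor + rng.normal(scale=0.1, size=n_minor)  # slope -1

def group_mse(w, x, y):
    """Mean squared error of the linear model y_hat = w * x on one group."""
    return np.mean((w * x - y) ** 2)

def average_loss(w):
    """Loss averaged over all samples pooled together (standard ERM objective)."""
    x = np.concatenate([x_major, x_minor])
    y = np.concatenate([y_major, y_minor])
    return group_mse(w, x, y)

def worst_group_loss(w):
    """Loss of the worst-off group (the group-DRO style objective)."""
    return max(group_mse(w, x_major, y_major), group_mse(w, x_minor, y_minor))

# Crude grid search over the single weight, just to compare the two objectives.
candidates = np.linspace(-3, 3, 601)
w_avg = candidates[np.argmin([average_loss(w) for w in candidates])]
w_dro = candidates[np.argmin([worst_group_loss(w) for w in candidates])]

for name, w in [("average-loss (ERM)", w_avg), ("worst-group (DRO)", w_dro)]:
    print(f"{name:20s} w={w:+.2f}  "
          f"majority MSE={group_mse(w, x_major, y_major):.2f}  "
          f"minority MSE={group_mse(w, x_minor, y_minor):.2f}")
```

With a 90/10 split, the average-loss fit tracks the majority group and performs poorly on the minority, while the worst-group objective accepts a higher majority error in exchange for a much lower worst-case error across groups.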

An AI with an Agenda - How Our Cognitive Biases Leak Into Machine Learning

NDC Conferences