1. Introduction
2. NLP Systems
3. Allocational Harm
4. Stereotyping
5. Bias in human annotation
6. Bias detection techniques
7. Word embedding association test
8. Null hypothesis
9. Word embeddings
10. Sentence embeddings
11. Error rates
12. Difference by city
13. Language disparities
14. Counterfactual evaluation
15. Mitigating biases
16. Feature and variant representations
17. Debiasing sentence embeddings
18. Soft debiasing
19. Data augmentation
20. Augmentation with humans
21. Bias research
Description:
Explore the critical topic of bias and fairness in Natural Language Processing (NLP) through this comprehensive lecture from CMU's Advanced NLP course. Delve into the types of bias present in NLP models and learn effective strategies for preventing them. Examine allocational harm, stereotyping, and bias in human annotation. Discover bias detection techniques, including the word embedding association test and null hypothesis testing. Analyze bias in word and sentence embeddings, error rates that differ by city, and disparities across languages. Investigate counterfactual evaluation methods and explore mitigation strategies such as feature and variant representations, debiasing sentence embeddings, and data augmentation, including augmentation with human input. Gain insight into the current landscape of bias research in NLP and its implications for developing fair and equitable language models.
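
The description names the word embedding association test (WEAT) and null hypothesis testing as detection techniques; the sketch below shows one way such a test can be computed. It is a minimal illustration, not code from the lecture: the `emb` word-vector lookup and the example word sets are placeholder assumptions, and the p-value is approximated with random permutations rather than by enumerating every equal-size split of the target words.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B, emb, n_permutations=10000, seed=0):
    """WEAT effect size and permutation-test p-value.

    Null hypothesis: target sets X and Y are equally associated with
    attribute sets A and B, so the observed difference arises by chance.
    """
    per_word = {w: association(w, A, B, emb) for w in list(X) + list(Y)}
    assoc_x = [per_word[w] for w in X]
    assoc_y = [per_word[w] for w in Y]
    observed = sum(assoc_x) - sum(assoc_y)
    effect_size = (np.mean(assoc_x) - np.mean(assoc_y)) / np.std(assoc_x + assoc_y)

    # Approximate the null distribution by repeatedly re-splitting the target
    # words into two random halves and recomputing the test statistic.
    rng = np.random.default_rng(seed)
    targets = list(per_word)
    exceed = 0
    for _ in range(n_permutations):
        perm = rng.permutation(targets)
        stat = (sum(per_word[w] for w in perm[:len(X)])
                - sum(per_word[w] for w in perm[len(X):]))
        if stat >= observed:
            exceed += 1
    return effect_size, exceed / n_permutations

# Illustrative word sets only; `emb` is assumed to map words to vectors,
# e.g. loaded from pretrained GloVe or word2vec embeddings.
# effect, p = weat(["doctor", "engineer"], ["nurse", "teacher"],
#                  ["he", "man"], ["she", "woman"], emb)

A small p-value would reject the null hypothesis that the two target sets are equally associated with the attribute sets, which is how tests of this kind surface stereotypical associations in embeddings.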

CMU Advanced NLP: Bias and Fairness

Graham Neubig