Talks - Angana Borah: Approaches to Fairness and Bias Mitigation in Natural Language Processing
Description:
Explore approaches to fairness and bias mitigation in Natural Language Processing in this 31-minute PyCon US talk. Delve into the importance of evaluating fairness and mitigating biases in large pre-trained language models like GPT and BERT, which are widely used in natural language understanding and generation applications. Understand how these models, trained on human-generated data from the web, can inherit and amplify human biases. Discover various methods for detecting and mitigating biases, and learn about available tools to incorporate into your models to ensure fairness. Gain valuable insights into the critical field of fairness and bias research in NLP, essential for developing more equitable and responsible AI systems.
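The description's core premise, that masked language models such as BERT absorb social biases from their web training data, can be probed directly. Below is a minimal sketch assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is named in the talk description; the speaker may use different tools and methods):

```python
# Minimal bias probe for a masked language model (illustrative sketch only;
# not the method presented in the talk).
from transformers import pipeline

# Fill-mask pipeline with a standard BERT checkpoint (an assumption, not from the talk).
unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]

for sentence in templates:
    print(sentence)
    # Top predictions for the masked token; skewed pronoun probabilities
    # across professions hint at learned gender associations.
    for pred in unmasker(sentence, top_k=3):
        print(f"  {pred['token_str']:>8}  p={pred['score']:.3f}")
```

Comparing the pronoun probabilities the model assigns for "doctor" versus "nurse" gives a quick, informal signal of the kind of inherited bias the talk discusses; dedicated benchmarks and mitigation toolkits go further than this probe.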
