Annual Meeting of the Association for Computational Linguistics

Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function

Abstract

Gender bias exists in natural language datasets, which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on modifying the loss function. We introduce a new term into the loss function that attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without significantly increasing perplexity. Compared to the existing debiasing strategies of data augmentation and word embedding debiasing, our method performs better in several respects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies on all bias evaluation metrics.
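The abstract describes the equalizing term only at a high level. Below is a minimal PyTorch sketch of one plausible form of such a term, penalizing the squared difference in log-probability between paired male and female words (e.g. "he"/"she", "actor"/"actress"); the function name, the `lam` weight, and the squared log-ratio penalty are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gender_equalizing_loss(logits, male_ids, female_ids, lam=1.0):
    """Hypothetical equalizing term: penalize divergence between the
    output probabilities of paired male/female words.

    logits:     (batch, vocab) unnormalized language-model outputs
    male_ids:   1-D LongTensor of vocabulary indices for male words
    female_ids: 1-D LongTensor of the paired female word indices
    lam:        weight trading off debiasing strength against perplexity
    """
    log_probs = F.log_softmax(logits, dim=-1)      # (batch, vocab)
    log_p_male = log_probs[:, male_ids]            # (batch, num_pairs)
    log_p_female = log_probs[:, female_ids]        # (batch, num_pairs)
    # The squared log-ratio is zero when a pair is equally probable.
    return lam * ((log_p_male - log_p_female) ** 2).mean()

# Usage sketch: add the term to the standard language-modeling loss.
# total_loss = F.cross_entropy(logits, targets) + \
#     gender_equalizing_loss(logits, male_ids, female_ids, lam=0.5)
```

In training, such a term would simply be added to the usual cross-entropy objective, with `lam` controlling how strongly equalization is enforced relative to predictive accuracy.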