
Debiasing Embeddings for Reduced Gender Bias in Text Classification

Abstract

Bolukbasi et al. (2016) demonstrated that pre-trained word embeddings can inherit gender bias from the data they were trained on. We investigate how this bias affects downstream classification tasks, using occupation classification (De-Arteaga et al., 2019) as a case study. We show that traditional techniques for debiasing embeddings can actually worsen the bias of the downstream classifier by providing a less noisy channel for communicating gender information. With a relatively minor adjustment, however, we show how these same techniques can be used to simultaneously reduce bias and maintain high classification accuracy.
机译:(Bolukbasi et al。,2016)证明了预训练词嵌入可以从他们训练的数据中继承性别偏见。我们使用职业分类的案例研究(De-Arteaga等,2019),研究这种偏见如何影响下游分类任务。我们显示,用于消除嵌入偏差的传统技术实际上可以通过提供较少的用于传达性别信息的嘈杂通道来加剧下游分类器的偏见。但是,通过较小的调整,我们将展示如何使用这些相同的技术来同时减少偏差并保持较高的分类精度。
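The "traditional technique" referenced here is, in the hard-debiasing scheme of Bolukbasi et al. (2016), the removal of each word vector's projection onto a learned gender direction. A minimal sketch of that projection step, assuming the gender direction has already been estimated (e.g. from difference vectors such as "he" − "she"):

```python
import numpy as np

def debias(vec: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Remove the component of `vec` that lies along `gender_dir`.

    After this step the returned vector is orthogonal to the gender
    direction, i.e. its dot product with `gender_dir` is zero.
    """
    g = gender_dir / np.linalg.norm(gender_dir)  # unit gender direction
    return vec - np.dot(vec, g) * g              # subtract the projection

# Illustrative usage with toy vectors (not real embeddings):
v = np.array([1.0, 2.0, 3.0])
g = np.array([1.0, 0.0, 0.0])
d = debias(v, g)  # component along g is removed
```

Note that this operates on individual vectors in isolation; the paper's point is that what happens to the *downstream classifier* after such a projection is a separate question from whether the embeddings themselves look unbiased.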
