LIPIcs: Leibniz International Proceedings in Informatics

Bias In, Bias Out? Evaluating the Folk Wisdom



Abstract

We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so "biased" training data arise due to discriminatory selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithmic decision rule favors that group. We refer to this phenomenon as bias reversal. We then clarify the conditions that give rise to bias reversal. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
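The selection mechanism described in the abstract can be illustrated with a minimal simulation. This is not the paper's model; the two groups, the Beta risk distribution, the thresholds, and the noise scale are all illustrative assumptions. A decision-maker biased against group B acts at a lower suspicion threshold for that group, so labels are observed for lower-risk members of B, and the positive rate among selected B cases falls below that of group A, which is the raw material for bias reversal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two groups with identical true risk distributions (illustrative assumption).
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
risk = rng.beta(2, 5, n)                 # latent probability of a positive label

# Biased decision-maker: acts at a lower suspicion threshold for group B,
# i.e. is biased *against* group B.
threshold = np.where(group == 1, 0.15, 0.35)
noise = rng.normal(0, 0.05, n)
stopped = risk + noise > threshold       # labels exist only for selected people

# Outcomes; observed in training data only where stopped is True.
label = rng.random(n) < risk

# "Training data" summary: positive rate by group among the selected sample.
for g, name in [(0, "A"), (1, "B")]:
    sel = stopped & (group == g)
    print(f"group {name}: stop rate {stopped[group == g].mean():.2f}, "
          f"positive rate among stopped {label[sel].mean():.2f}")
```

Because group B is selected at a lower threshold, its selected sample includes many lower-risk individuals, so its observed positive rate is lower; a predictor trained only on the selected sample would therefore score group B as lower risk, favoring the group the decision-maker was biased against.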
