Journal of Information Policy

How Algorithms Discriminate Based on Data they Lack: Challenges, Solutions, and Policy Implications



Abstract

Organizations often employ data-driven models to inform decisions that can have a significant impact on people's lives (e.g., university admissions, hiring). In order to protect people's privacy and prevent discrimination, these decision-makers may choose to delete or avoid collecting social category data, like sex and race. In this article, we argue that such censoring can exacerbate discrimination by making biases more difficult to detect. We begin by detailing how computerized decisions can lead to biases in the absence of social category data and, in some contexts, may even sustain biases that arise by random chance. We then show how proactively using social category data can help illuminate and combat discriminatory practices, drawing on cases from education and employment that lead to strategies for detecting and preventing discrimination. We conclude that discrimination can occur in any sociotechnical system in which someone decides to use an algorithmic process to inform decision-making, and we offer a set of broader implications for researchers and policymakers.
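The abstract's core mechanism, that a model which never sees a protected attribute can still discriminate through correlated proxies, and that the resulting disparity only becomes measurable when social category data is retained for auditing, can be illustrated with a small synthetic sketch. The Python below is not taken from the article; the binary group, the correlated "neighborhood" proxy, the top-30% selection rule, and the disparate-impact ratio are all illustrative assumptions.

```python
# A minimal, hypothetical sketch: a decision rule that never sees the
# protected attribute can still produce disparate outcomes through a
# correlated proxy, and the disparity is only measurable because group
# labels were retained for auditing.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (a binary social category); never given to the scoring rule.
group = rng.integers(0, 2, size=n)

# True qualification is identically distributed in both groups.
skill = rng.normal(0.0, 1.0, size=n)

# A proxy feature (e.g., neighborhood) that correlates with group membership.
proxy = 0.8 * group + rng.normal(0.0, 0.5, size=n)

# "Blind" score: uses only skill and the proxy, not the group itself.
score = skill + 1.0 * proxy
selected = score > np.quantile(score, 0.7)   # top 30% are admitted/hired

# The audit below is only possible because group labels were collected.
rate_0 = selected[group == 0].mean()
rate_1 = selected[group == 1].mean()
print(f"selection rate, group 0: {rate_0:.2%}")
print(f"selection rate, group 1: {rate_1:.2%}")
print(f"disparate-impact ratio:  {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
```

In this toy setup, setting the proxy weight to zero makes the two selection rates converge in expectation, which is the point of the contrast: the disparity enters through the proxy, and the audit that reveals it requires exactly the social category data the decision-maker might have chosen not to keep.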

