
Implicit Bias in Predictive Data Profiling Within Recruitments



Abstract

Recruiters today often use data-mining and profiling tools as an initial screen for promising candidates. Their goal is usually to become more objective and to escape human limitations such as implicit biases against underprivileged groups. This exploratory analysis identifies three potential problems with using such predictive computer tools for hiring. First, they may miss the best candidates, because the underlying algorithms are tuned on limited and outdated data. Second, there is a risk of directly or indirectly discriminating against candidates, and third, a risk of failing to give all individuals equal opportunities. These problems are not new, and this theoretical analysis, together with other similar work, suggests that algorithms and predictive data-mining tools exhibit the same kinds of implicit biases as humans. Our human limitations, then, do not seem to be limited to us humans.

