Expert Systems with Applications

Improving fairness of artificial intelligence algorithms in Privileged-Group Selection Bias data settings



Abstract

An increasing number of decisions affecting the daily lives of human beings are being controlled by artificial intelligence (AI) algorithms. Since they now touch on many aspects of our lives, it is crucial to develop AI algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness, even when no unfairness is intended. In this paper, we study the fairness of AI algorithms in data settings in which unprivileged groups are extremely underrepresented compared to privileged groups. A typical domain that often presents such Privileged-Group Selection Bias (PGSB) is AI-based hiring, which suffers from an inherent lack of labeled information for rejected applicants. We first demonstrate that such a selection bias can lead to a high algorithmic bias, even if privileged and unprivileged groups are treated exactly the same. We then propose several methods to overcome this type of bias. In particular, we suggest three in-process and pre-process fairness mechanisms, combined with both supervised and semi-supervised learning algorithms. An extensive evaluation conducted on two real-world datasets reveals that the proposed methods improve fairness considerably, with only a minimal compromise in accuracy, despite the limited information available for unprivileged groups and the inherent trade-off between fairness and accuracy.
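The abstract does not spell out the paper's three fairness mechanisms. As a minimal sketch of what a *pre-process* fairness mechanism can look like, the following implements the standard reweighing technique of Kamiran and Calders (2012), which assigns each training sample a weight so that the sensitive attribute (e.g. group membership) becomes statistically independent of the label; the weights can then be passed to any supervised learner that accepts sample weights. The function name and toy data below are illustrative, not taken from the paper.

```python
import numpy as np

def reweighing_weights(sensitive, labels):
    """Per-sample weights making the sensitive attribute independent
    of the label: weight(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    weights = np.ones(len(labels), dtype=float)
    for a in np.unique(sensitive):
        for y in np.unique(labels):
            mask = (sensitive == a) & (labels == y)
            p_joint = mask.mean()  # empirical P(A=a, Y=y)
            if p_joint > 0:
                # expected probability under independence / observed probability
                weights[mask] = (sensitive == a).mean() * (labels == y).mean() / p_joint
    return weights

# Toy data: the privileged group (A=1) is overrepresented among positives (Y=1).
A = np.array([1, 1, 1, 1, 0, 0, 0, 0])
Y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
w = reweighing_weights(A, Y)
```

After reweighing, the weighted positive rate is identical across both groups, so a learner trained with these sample weights no longer sees group membership as predictive of the label. This addresses only label imbalance between groups; the PGSB setting studied in the paper is harder, since labels for the unprivileged group are largely missing, which is why the authors also combine such mechanisms with semi-supervised learning.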
