IEEE International Conference on Data Engineering

iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making



Abstract

People are rated and ranked for algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, and disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
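The individual-fairness notion stated above (users who are similar on task-relevant attributes, ignoring protected ones, should receive similar outcomes) can be illustrated with a minimal sketch. This is not the paper's iFair algorithm; it is a hypothetical Lipschitz-style check, with made-up attribute names and a free constant `L`, that flags pairs of users whose outcome gap exceeds their distance in task-relevant feature space:

```python
import numpy as np

def individual_fairness_violations(X_relevant, outcomes, L=1.0):
    """Flag pairs of users who are close in task-relevant attributes
    (protected attributes already excluded from X_relevant) but whose
    outcomes differ by more than L times their feature distance."""
    n = len(X_relevant)
    violations = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(X_relevant[i] - X_relevant[j])
            if abs(outcomes[i] - outcomes[j]) > L * dist:
                violations.append((i, j))
    return violations

# Two applicants with identical qualifications but very different scores,
# plus one clearly weaker applicant (columns: e.g. GPA, years of experience).
X = np.array([[3.8, 5.0], [3.8, 5.0], [2.0, 1.0]])
y = np.array([0.9, 0.2, 0.3])
print(individual_fairness_violations(X, y))  # → [(0, 1)]
```

The first two users are identical on the task-relevant attributes, so any outcome gap violates the condition; iFair instead learns a representation in which such pairs are mapped close together while downstream utility is preserved.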


