IEEE International Conference on Data Engineering

iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

Abstract

People are rated and ranked for algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has predominantly pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, while disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
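The abstract describes a probabilistic mapping of user records into a low-rank representation that trades off reconstruction utility against an individual-fairness constraint: records that are close on the non-protected, task-relevant attributes should stay close in the learned representation. The sketch below illustrates one plausible form of such an objective, assuming a prototype-based soft assignment and a pairwise distance-preservation fairness term; the function names, the prototype parameterization, and the weights `lam`/`mu` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_assignment(X, prototypes):
    # Probabilistic (softmax) assignment of each record to low-rank
    # prototypes, based on negative squared Euclidean distance.
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def ifair_style_loss(X, prototypes, nonprotected_idx, lam=1.0, mu=1.0):
    """Hypothetical combined objective: reconstruction utility plus
    individual fairness. The fairness term asks that pairwise distances
    between mapped records track the pairwise distances computed on the
    non-protected (task-relevant) attributes only."""
    P = soft_assignment(X, prototypes)
    X_hat = P @ prototypes                       # low-rank reconstruction

    # Utility loss: how well the representation preserves the data.
    l_util = ((X - X_hat) ** 2).sum()

    # Individual-fairness loss: compare pairwise distances in the learned
    # representation with distances on non-protected attributes.
    Z = X[:, nonprotected_idx]
    d_repr = np.linalg.norm(X_hat[:, None, :] - X_hat[None, :, :], axis=2)
    d_fair = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    l_fair = ((d_repr - d_fair) ** 2).sum() / 2  # each pair counted once

    return lam * l_util + mu * l_fair

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 6))          # 50 records, 6 attributes
    protos = rng.normal(size=(3, 6))      # 3 low-rank prototypes
    nonprotected = [0, 1, 2, 3, 4]        # attribute 5 treated as protected
    print(ifair_style_loss(X, protos, nonprotected))
```

In the actual method, the prototypes (and any per-attribute weights) would be learned by minimizing such a loss with a gradient-based optimizer; here the loss is only evaluated on fixed random prototypes to keep the sketch self-contained.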