Data Mining and Knowledge Discovery > Matching code and law: achieving algorithmic fairness with optimal transport
Matching code and law: achieving algorithmic fairness with optimal transport


Abstract

Increasingly, discrimination by algorithms is perceived as a societal and legal problem. In response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the continuous fairness algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" and "what you see is what you get" proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., multi-dimensional discrimination against certain groups on the grounds of several criteria. We discuss three main examples (credit applications; college admissions; insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
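The interpolation idea can be illustrated with a minimal sketch. In one dimension, the Wasserstein-2 barycenter of several score distributions is simply the weighted average of their quantile functions, so a θ-parameterized repair can blend each individual's original score (θ = 0, "what you see is what you get") with its image under the transport map onto the barycenter (θ = 1, "we're all equal"). The function below is an illustrative assumption about how such a repair could be implemented, not the paper's actual algorithm; the name `continuous_fairness_repair` and the 101-point quantile grid are choices of this sketch.

```python
import numpy as np

def continuous_fairness_repair(scores, groups, theta):
    """Sketch of a theta-interpolated score repair in one dimension.

    theta = 0 leaves every individual's score unchanged; theta = 1
    maps each group's score distribution onto a common barycenter,
    equalizing the groups' distributions. Intermediate theta values
    interpolate between these two worldviews.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    repaired = scores.copy()
    uniq = np.unique(groups)
    # Quantile grid: in 1-D, optimal transport maps reduce to
    # quantile matching, so a grid of quantiles suffices.
    qs = np.linspace(0.0, 1.0, 101)
    group_quantiles = {g: np.quantile(scores[groups == g], qs) for g in uniq}
    weights = {g: np.mean(groups == g) for g in uniq}
    # Wasserstein-2 barycenter of 1-D distributions: the weighted
    # average of the group quantile functions.
    bary = sum(weights[g] * group_quantiles[g] for g in uniq)
    for g in uniq:
        idx = groups == g
        # Rank of each individual within their own group (empirical CDF).
        ranks = np.searchsorted(np.sort(scores[idx]), scores[idx],
                                side="right") / idx.sum()
        # Transport target: the barycenter score at the same rank.
        target = np.interp(ranks, qs, bary)
        repaired[idx] = (1.0 - theta) * scores[idx] + theta * target
    return repaired
```

With equally sized groups whose scores differ by a constant offset, θ = 1 sends both groups to identical repaired scores, while θ = 0 returns the input untouched; within-group rank order is preserved for every θ, which is one way the individual-fairness side of the trade-off survives the repair.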
