Theory of Cryptography Conference

Achieving Fair Treatment in Algorithmic Classification



Abstract

Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today's classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following Hardt et al. (NIPS'16), we consider and formalize the task of sanitizing an unfair classifier C into a classifier C' satisfying an approximate notion of "equalized odds," or fair treatment. Our main result shows how to take any (possibly unfair) classifier C over a finite outcome space and transform it, by merely perturbing the output of C according to a distribution learned from black-box access to samples of labeled, previously classified data, into a classifier C' that satisfies fair treatment; we additionally show that the derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
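The abstract names two ingredients of the construction: statistics about C gathered only from black-box samples of labeled, previously classified data, and a derived classifier C' that randomly perturbs C's output. The paper's actual procedure chooses the perturbation distributions by optimization; the sketch below (all function and variable names are hypothetical, not from the paper) only illustrates those two ingredients, estimating the empirical fair-treatment gap from such samples and applying a caller-supplied output perturbation.

```python
import random
from collections import defaultdict

def treatment_gap(samples):
    """Empirical fair-treatment gap estimated from black-box samples.

    samples: iterable of (group, true_label, predicted_label) triples,
    i.e. labeled data that C has previously classified.  Returns the
    largest difference, over all (true label, output) pairs, of
    P[C(x) = c | Y = y, group = a] between any two groups; exact
    equalized odds corresponds to a gap of 0.
    """
    counts = defaultdict(lambda: defaultdict(int))  # (a, y) -> c -> count
    totals = defaultdict(int)                       # (a, y) -> count
    for a, y, c in samples:
        counts[(a, y)][c] += 1
        totals[(a, y)] += 1
    groups = {a for (a, _) in totals}
    labels = {y for (_, y) in totals}
    outputs = {c for dist in counts.values() for c in dist}
    gap = 0.0
    for y in labels:
        for c in outputs:
            rates = [counts[(a, y)][c] / totals[(a, y)]
                     for a in groups if totals[(a, y)] > 0]
            if len(rates) > 1:
                gap = max(gap, max(rates) - min(rates))
    return gap

def perturbed_classifier(base, perturbation):
    """Derived classifier C': run C, then randomly remap its output.

    perturbation: dict mapping (group, base_output) to a distribution
    {new_output: probability}; outputs without an entry pass through
    unchanged.  Choosing these distributions well is the hard part the
    paper addresses; here they are simply given.
    """
    def classify(x, group):
        c = base(x)
        dist = perturbation.get((group, c))
        if dist is None:
            return c
        r, acc = random.random(), 0.0
        for new_c, p in dist.items():
            acc += p
            if r < acc:
                return new_c
        return c
    return classify
```

For example, a perturbation entry `{("A", 1): {0: 0.3, 1: 0.7}}` would flip a positive prediction for group A to negative 30% of the time, which is the kind of output randomization the sanitization learns.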
