Statistics and Computing

Achieving fairness with a simple ridge penalty



Abstract

In this paper, we present a general framework for estimating regression models subject to a user-defined level of fairness. We enforce fairness as a model selection step in which we choose the value of a ridge penalty to control the effect of the sensitive attributes. We then estimate the parameters of the model conditional on the chosen penalty value. Our proposal is mathematically simple, with a solution that is partly in closed form, and it produces estimates of the regression coefficients that are intuitive to interpret as a function of the level of fairness. Furthermore, it is easily extended to generalised linear models, kernelised regression models and other penalties, and it can accommodate multiple definitions of fairness. We compare our approach with the regression model from Komiyama et al. (in: Proceedings of Machine Learning Research, 35th International Conference on Machine Learning (ICML), vol 80, pp 2737–2746, 2018), which implements a provably optimal linear regression model, and with the fair models from Zafar et al. (J Mach Learn Res 20:1–42, 2019). We evaluate these approaches empirically on six different data sets, and we find that our proposal provides better goodness of fit and better predictive accuracy for the same level of fairness. In addition, we highlight a source of bias in the original experimental evaluation of Komiyama et al. (in: Proceedings of Machine Learning Research, 35th International Conference on Machine Learning (ICML), vol 80, pp 2737–2746, 2018).
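To make the two-step idea in the abstract concrete, the following is a minimal sketch in Python. It assumes, as an illustration only, that the ridge penalty acts solely on the coefficients of the sensitive attributes and that fairness is proxied by the share of the fitted-value variance attributable to those attributes; the function names, the fairness proxy and the grid-search selection rule are hypothetical simplifications, not the paper's exact estimator.

```python
import numpy as np

def fair_ridge_fit(X, S, y, lam):
    """Fit y ~ [X, S] with a ridge penalty applied only to the coefficients
    of the sensitive attributes S (illustrative sketch, partly closed form)."""
    Z = np.hstack([X, S])
    p, q = X.shape[1], S.shape[1]
    # Penalise only the last q coefficients, i.e. those attached to S.
    D = np.diag(np.concatenate([np.zeros(p), np.full(q, lam)]))
    beta = np.linalg.solve(Z.T @ Z + D, Z.T @ y)
    return beta[:p], beta[p:]

def sensitive_share(X, S, beta_x, beta_s):
    """Fraction of the fitted-value variance explained by S: one possible
    fairness proxy (an assumption, not necessarily the paper's definition)."""
    fitted = X @ beta_x + S @ beta_s
    return np.var(S @ beta_s) / np.var(fitted)

def select_lambda(X, S, y, max_share, grid=np.logspace(-3, 4, 50)):
    """Model-selection step: the smallest penalty on the grid whose fit keeps
    the sensitive attributes' share of the fitted variance below max_share."""
    for lam in sorted(grid):
        beta_x, beta_s = fair_ridge_fit(X, S, y, lam)
        if sensitive_share(X, S, beta_x, beta_s) <= max_share:
            return lam, beta_x, beta_s
    # If no grid value meets the constraint, return the most heavily penalised fit.
    return grid[-1], beta_x, beta_s
```

In this reading, `max_share` plays the role of the user-defined level of fairness: the penalty value is chosen first, and the regression coefficients are then estimated conditional on it, which is why they can be interpreted as a function of the fairness level.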
