Neurocomputing

Randomized learning and generalization of fair and private classifiers: From PAC-Bayes to stability and differential privacy

Abstract

We address the problem of randomized learning and generalization of fair and private classifiers. On the one hand, we want to ensure that sensitive information does not unfairly influence the outcome of a classifier. On the other hand, we have to learn from data while preserving the privacy of individual observations. We first address this issue in the PAC-Bayes framework, presenting an approach that trades off and bounds the risk and the fairness of the randomized (Gibbs) classifier. Our new approach is able to handle several different state-of-the-art fairness measures. For this purpose, we further develop the idea that the PAC-Bayes prior can be defined based on the data-generating distribution without actually knowing it. In particular, we define a prior and a posterior which give more weight to functions with good generalization and fairness properties. Furthermore, we show that this randomized classifier possesses interesting stability properties, using algorithmic distribution stability theory. Finally, we show that the new posterior can be exploited to define a randomized, accurate, and fair algorithm. Differential privacy theory allows us to derive that the latter algorithm has interesting privacy-preserving properties, ensuring our threefold goal of good generalization, fairness, and privacy of the final model. (C) 2020 Elsevier B.V. All rights reserved.
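To make the Gibbs-classifier idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm or bounds: hypotheses drawn from a prior are exponentially reweighted by a trade-off of empirical risk and a demographic-parity fairness gap, and the randomized classifier samples one hypothesis from the resulting posterior. All data, hyperparameters (`lam`, `beta`), and function names here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, binary sensitive attribute s, binary labels y.
n = 200
X = rng.normal(size=(n, 2))
s = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

def predict(w, X):
    # Linear threshold classifier h_w(x) = 1[w . x > 0].
    return (X @ w > 0).astype(int)

def risk(w):
    # Empirical 0-1 risk of h_w on the sample.
    return np.mean(predict(w, X) != y)

def demographic_parity_gap(w):
    # |P(h=1 | s=0) - P(h=1 | s=1)|: one common fairness measure.
    p = predict(w, X)
    return abs(p[s == 0].mean() - p[s == 1].mean())

# Candidate hypotheses drawn from an (assumed) Gaussian prior over weights.
prior_samples = rng.normal(size=(500, 2))

# Gibbs-style posterior: exponentially reweight each hypothesis by a
# trade-off of empirical risk and fairness violation.
lam, beta = 10.0, 10.0
scores = np.array(
    [risk(w) + beta * demographic_parity_gap(w) for w in prior_samples]
)
weights = np.exp(-lam * scores)
weights /= weights.sum()

# The randomized (Gibbs) classifier samples one hypothesis per prediction.
idx = rng.choice(len(prior_samples), p=weights)
w_sampled = prior_samples[idx]
print(round(risk(w_sampled), 3), round(demographic_parity_gap(w_sampled), 3))
```

By construction, the posterior concentrates on hypotheses with a low combined risk-plus-unfairness score, so its weighted average score is never worse than the prior's; the paper's contribution is to bound the true (population) risk and fairness of such a posterior, which this sketch does not attempt.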
