International Conference on Discovery Science

Differentially Private Empirical Risk Minimization with Input Perturbation


Abstract

We propose a novel framework for differentially private ERM: input perturbation. Existing differentially private ERM methods implicitly assume that the data contributors submit their private data to a database, expecting the database to invoke a differentially private mechanism when publishing the learned model. In input perturbation, each data contributor independently randomizes his or her own data and submits the perturbed data to the database. We show that the input perturbation framework theoretically guarantees that the model learned from the randomized data eventually satisfies differential privacy with the prescribed privacy parameters. At the same time, input perturbation guarantees local differential privacy against the server. We also show that, under a certain condition, the excess risk bound of the model learned with input perturbation is O(1/n), where n is the sample size; this matches the excess risk bound of the state-of-the-art.
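The abstract does not spell out the concrete randomization, so the sketch below is only a minimal illustration of the input-perturbation idea, not the construction analyzed in the paper: each contributor applies the Gaussian mechanism to his or her own record before submission, and the server then runs ordinary regularized ERM (ridge regression here) on the noisy records. The clipping bounds, the noise calibration, and the choice of ridge regression are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_locally(x, y, epsilon, delta):
    """Each contributor randomizes his or her own record (x, y) with the
    Gaussian mechanism before it ever leaves the device. Assumes the record
    is pre-clipped so that ||x||_2 <= 1 and |y| <= 1, in which case any two
    records differ by at most 2*sqrt(2) in L2 norm (illustrative calibration)."""
    sensitivity = 2.0 * np.sqrt(2.0)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    z = np.concatenate([x, [y]])
    z_noisy = z + rng.normal(scale=sigma, size=z.shape)
    return z_noisy[:-1], z_noisy[-1]

def server_erm(records, lam=1.0):
    """The server runs plain regularized ERM on the perturbed records
    (ridge regression); no further privacy mechanism is applied server-side."""
    X = np.stack([x for x, _ in records])
    y = np.array([y for _, y in records])
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * len(records) * np.eye(d), X.T @ y)

# Contributors perturb independently; only noisy records reach the server.
n, d = 1000, 5
raw = [(rng.normal(size=d), rng.normal()) for _ in range(n)]
noisy = [perturb_locally(x / max(1.0, np.linalg.norm(x)),
                         float(np.clip(y, -1.0, 1.0)),
                         epsilon=1.0, delta=1e-5)
         for x, y in raw]
w_hat = server_erm(noisy)
```

Because the server only ever sees the noisy records, local differential privacy against the server holds by construction; what the paper adds is the analysis showing that the final learned model also meets the prescribed privacy parameters with O(1/n) excess risk.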
