IEEE Conference on Computer Communications

Differentially-Private Deep Learning from an Optimization Perspective



Abstract

With the amount of user data crowdsourced for data mining dramatically increasing, there is an urgent need to protect the privacy of individuals. Differential privacy mechanisms are conventionally adopted to add noise to the user data, so that an adversary is not able to gain any additional knowledge about individuals participating in the crowdsourcing by inferring from the learned model. However, such protection usually comes at the cost of significantly degraded learning results. We have observed that the fundamental cause of this problem is that the relationship between model utility and data privacy is not accurately characterized, leading to privacy constraints that are overly strict. In this paper, we address this problem from an optimization perspective, and formulate it as one that minimizes the accuracy loss subject to a set of privacy constraints. We use sensitivity to describe the impact of perturbation noise on model utility, and propose a new optimized additive noise mechanism that improves overall learning accuracy while conforming to individual privacy constraints. As a highlight of our privacy mechanism, it is highly robust in the high-privacy regime (when ε → 0), and against any changes in the model structure and experimental settings.
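The sensitivity-calibrated additive noise the abstract refers to can be illustrated with the standard Laplace mechanism, the conventional baseline that the paper's optimized mechanism improves upon. The sketch below is a minimal illustration of that baseline, not the paper's method; the function name and parameters are illustrative:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Standard epsilon-DP Laplace mechanism: perturb `value` with
    Laplace noise whose scale is sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    # Smaller epsilon (stronger privacy) means a larger noise scale,
    # which is exactly where learning accuracy degrades most.
    return value + rng.laplace(0.0, scale, size=np.shape(value))
```

Because the noise scale grows as ε → 0, a mechanism's behavior in this high-privacy regime is the stress test the abstract highlights: the paper's contribution is to redistribute this noise budget, via the utility–privacy relationship, rather than apply it uniformly.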
