IEEE Conference on Computer Communications

Differentially-Private Deep Learning from an Optimization Perspective

Abstract

With the amount of user data crowdsourced for data mining increasing dramatically, there is an urgent need to protect the privacy of individuals. Differential privacy mechanisms are conventionally adopted to add noise to the user data, so that an adversary cannot gain any additional knowledge about individuals participating in the crowdsourcing by inference from the learned model. However, such protection usually comes at the cost of significantly degraded learning results. We observe that the fundamental cause of this problem is that the relationship between model utility and data privacy is not accurately characterized, leading to privacy constraints that are overly strict. In this paper, we address this problem from an optimization perspective, and formulate it as minimizing the accuracy loss given a set of privacy constraints. We use sensitivity to describe the impact of perturbation noise on model utility, and propose a new optimized additive noise mechanism that improves overall learning accuracy while conforming to individual privacy constraints. As a highlight, our privacy mechanism is highly robust in the high-privacy regime (as ε → 0), and against changes in the model structure and experimental settings.
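For context, below is a minimal sketch of the conventional sensitivity-calibrated additive-noise baseline that the abstract contrasts against: per-example gradient clipping to bound L2 sensitivity, followed by the standard Gaussian mechanism. The function names and parameter values are illustrative, not from the paper, and the paper's optimized noise mechanism itself is not specified in the abstract.

import numpy as np

def clip_l2(grad: np.ndarray, clip_norm: float) -> np.ndarray:
    """Clip a per-example gradient so its L2 norm is at most clip_norm,
    bounding the L2 sensitivity of the released gradient to clip_norm."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def gaussian_mechanism(grad: np.ndarray, sensitivity: float,
                       epsilon: float, delta: float) -> np.ndarray:
    """Standard Gaussian mechanism: add isotropic noise with scale
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon, which
    yields (epsilon, delta)-differential privacy for 0 < epsilon < 1."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return grad + np.random.normal(loc=0.0, scale=sigma, size=grad.shape)

# Example: privatize one gradient before a model update.
g = np.random.randn(1000)            # stand-in for a per-example gradient
g_clipped = clip_l2(g, clip_norm=1.0)
g_private = gaussian_mechanism(g_clipped, sensitivity=1.0,
                               epsilon=0.5, delta=1e-5)

As the abstract describes, the proposed mechanism departs from this baseline by shaping the additive noise through an optimization that minimizes accuracy loss subject to the same per-individual privacy constraints, rather than adding isotropic noise uniformly.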