Venue: IEEE International Conference on Data Mining

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning



Abstract

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) it has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) it can be applied to a variety of different deep neural networks. To achieve this, we devise a way to perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" to features that are "less relevant" to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
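The abstract's central idea, adaptively injecting more Laplace noise into features that are less relevant to the output, can be illustrated by splitting a fixed privacy budget epsilon across features in proportion to each feature's relevance score. The sketch below is a minimal illustration of that budget-allocation idea only, not the paper's actual algorithm; the function names, the relevance vector, and the unit sensitivity are all assumptions for the example.

```python
import numpy as np

def adaptive_scales(relevance, epsilon, sensitivity=1.0):
    # Split the total budget epsilon across features in proportion to
    # their relevance: a more relevant feature gets a larger budget
    # share, hence a smaller Laplace scale (less noise), and vice versa.
    relevance = np.asarray(relevance, dtype=float)
    budgets = epsilon * relevance / relevance.sum()
    return sensitivity / budgets

def perturb(features, relevance, epsilon, sensitivity=1.0, seed=None):
    # Add per-feature Laplace noise with the adaptive scales above.
    rng = np.random.default_rng(seed)
    scales = adaptive_scales(relevance, epsilon, sensitivity)
    return np.asarray(features, dtype=float) + rng.laplace(0.0, scales)
```

With relevance `[3.0, 1.0]` and `epsilon=1.0`, the first (more relevant) feature receives budget 0.75 and scale 4/3, while the second receives budget 0.25 and scale 4.0, i.e. four times as much expected noise magnitude.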
