Open Computer Science

A Neuron Noise-Injection Technique for Privacy Preserving Deep Neural Networks



Abstract

Data is the key to information mining and the unveiling of hidden knowledge. The ability to reveal knowledge depends on the extractable features of a dataset as well as the depth of the mining model. However, many of these datasets embed sensitive information that can lead to privacy violations, and they are subsequently used to build deep neural network (DNN) models. Recent approaches to enforcing privacy and protecting data sensitivity in DNN models degrade accuracy, giving rise to a significant accuracy disparity between a non-private DNN and a privacy-preserving DNN model. This accuracy gap stems from uncalibrated noise flooding and the inability to quantify the right level of noise required to perturb individual neurons in the DNN model. Consequently, this has hindered the use of privacy-protected DNN models in real-life applications. In this paper, we present a neuron noise-injection technique based on layer-wise buffered contribution-ratio forwarding and ε-differential privacy to preserve privacy in a DNN model. We adapt a layer-wise relevance propagation technique to compute a contribution ratio for each neuron in our network during the pre-training phase. Based on each neuron's contribution ratio, we generate a noise-tuple via the Laplace mechanism, which helps to eliminate unwanted noise flooding. The noise-tuple is subsequently injected into the training network through its neurons to preserve the privacy of the training dataset in a differentially private manner. Hence, each neuron receives the right proportion of noise, as estimated via its contribution ratio, and the unquantifiable noise that lowers the accuracy of privacy-preserving DNN models is avoided. Extensive experiments on three real-world datasets show that our approach narrows the existing accuracy gap substantially and outperforms state-of-the-art approaches in this context.
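The abstract outlines the mechanism but gives no pseudocode, so the sketch below illustrates one plausible reading in numpy: contribution ratios come from a simplified LRP-like heuristic (|activation| weighted by total outgoing |weight|), and the privacy budget ε is split across a layer's neurons in proportion to those ratios before drawing one Laplace sample per neuron. The function names (`contribution_ratios`, `laplace_noise_tuple`), the toy network dimensions, and the budget-splitting rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def contribution_ratios(activations, outgoing_weights):
    # LRP-like heuristic (an assumption standing in for full layer-wise
    # relevance propagation): a neuron's relevance is |activation| times
    # the total |outgoing weight| mass, normalised to sum to 1 per layer.
    relevance = np.abs(activations) * np.abs(outgoing_weights).sum(axis=1)
    total = relevance.sum()
    if total == 0.0:
        return np.full(relevance.shape, 1.0 / relevance.size)
    return relevance / total

def laplace_noise_tuple(ratios, epsilon, sensitivity=1.0):
    # Split the privacy budget across neurons in proportion to their
    # contribution ratios (eps_i = epsilon * r_i), then draw one Laplace
    # sample per neuron with scale b_i = sensitivity / eps_i.
    eps_i = np.maximum(epsilon * ratios, 1e-12)  # guard against zero ratios
    return rng.laplace(loc=0.0, scale=sensitivity / eps_i)

# Toy two-layer network: x -> W1 -> hidden (ReLU) -> W2 -> output.
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))
x = rng.normal(size=d_in)

hidden = np.maximum(x @ W1, 0.0)                  # hidden activations
ratios = contribution_ratios(hidden, W2)          # per-neuron contribution share
noise = laplace_noise_tuple(ratios, epsilon=1.0)  # one Laplace draw per neuron
output = (hidden + noise) @ W2                    # inject noise, then propagate
print(output)
```

Under this reading, a high-contribution neuron receives a larger share of ε and hence a smaller Laplace scale, which is one way to avoid the uniform noise flooding the abstract criticises; the paper's exact allocation rule may differ.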
