
Preserving differential privacy in convolutional deep belief networks



Abstract

The remarkable development of deep learning in the medicine and healthcare domain raises obvious privacy concerns when deep neural networks are built on users' personal and highly sensitive data, such as clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing ϵ-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is that we propose the use of Chebyshev expansion to derive approximate polynomial representations of the objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model to a health social network (YesiWell data) and a handwritten digit dataset (MNIST data) for human behavior prediction, human behavior classification, and handwritten digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective and significantly outperforms existing solutions.
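The core idea summarized above is to approximate the energy-based objective with a truncated polynomial (via Chebyshev expansion) and then inject calibrated noise into that representation rather than into the trained model's outputs. The sketch below is only an illustration of that idea, not the authors' implementation: it fits a low-degree Chebyshev series to the softplus nonlinearity that appears in RBM/CDBN free energies and perturbs the fitted coefficients with Laplace noise in the spirit of the functional mechanism. The expansion degree, domain, sensitivity, and ϵ values are assumptions chosen for the example.

```python
# Illustrative sketch only: Chebyshev approximation of a smooth objective term,
# with Laplace noise added to the polynomial coefficients (functional-mechanism
# style). Degree, domain, sensitivity, and epsilon are assumed values.
import numpy as np

def chebyshev_coeffs(func, degree, domain=(-1.0, 1.0), n_samples=200):
    """Fit a truncated Chebyshev series to `func` sampled on `domain`."""
    x = np.linspace(domain[0], domain[1], n_samples)
    return np.polynomial.chebyshev.chebfit(x, func(x), degree)

def perturb_coeffs(coeffs, sensitivity, epsilon, rng=None):
    """Add Laplace noise (scale = sensitivity / epsilon) to each coefficient."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return coeffs + rng.laplace(loc=0.0, scale=scale, size=coeffs.shape)

if __name__ == "__main__":
    softplus = lambda x: np.log1p(np.exp(x))      # term found in RBM/CDBN free energy
    coeffs = chebyshev_coeffs(softplus, degree=4)  # assumed truncation order
    noisy = perturb_coeffs(coeffs, sensitivity=1.0, epsilon=1.0)  # assumed values
    xs = np.linspace(-1.0, 1.0, 5)
    print(np.polynomial.chebyshev.chebval(xs, noisy))  # perturbed approximation
```

Training then proceeds on the noisy polynomial objective, so the privacy cost is paid once when the coefficients are perturbed rather than at every query of the trained model.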

