IEEE International Conference on Trust, Security and Privacy in Computing and Communications

Differential Privacy Preservation in Interpretable Feedforward-Designed Convolutional Neural Networks


Abstract

The feedforward-designed convolutional neural network (FF-CNN) is an interpretable network whose parameters are trained without backpropagation (BP) or gradient-based optimization (e.g., SGD). Instead, the parameters of each layer are derived in a single pass from the statistics of the previous layer's output. Because the network complexity under the FF design is lower than that of BP training, FF-CNN offers better utility than BP-trained models in semi-supervised learning, ensemble learning, and continuous subspace learning. However, releasing the FF-CNN training process or model can leak the privacy of the training data. In this paper, we analyze and verify that an attacker who obtains the training parameters of FF-CNN and partial output responses can recover private information about the original training data. Privacy protection of the training data is therefore imperative. However, due to the particularity of the FF-CNN training method, existing deep learning privacy protection techniques are not applicable. We therefore propose an algorithm called differential privacy subspace approximation with adjusted bias (DPSaab) to protect the training data in FF-CNN. According to the different contributions of the model filters to the output response, we allocate the privacy budget in proportion to the eigenvalue ratios, assigning a larger budget to filters with larger contributions and vice versa. Extensive experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that DPSaab offers better utility than existing privacy protection techniques.
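To make the budget-allocation idea concrete, the following is a minimal sketch, not the authors' released implementation: Saab-style filters are computed in one pass from patch statistics via PCA, and a total privacy budget is split across filters in proportion to their eigenvalues, so filters contributing more to the output response receive a larger budget and hence less noise. The names `saab_filters` and `dpsaab_perturb`, and the use of the Laplace mechanism on the filter weights, are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def saab_filters(patches, num_filters):
    """One-pass filter computation from patch statistics (PCA), as in FF-CNN."""
    mean = patches.mean(axis=0)
    centered = patches - mean                       # remove the mean (DC) component
    cov = centered.T @ centered / len(patches)      # second-order statistics of patches
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:num_filters] # keep the leading components
    return eigvecs[:, order].T, eigvals[order]      # filters (rows) and their eigenvalues

def dpsaab_perturb(filters, eigvals, epsilon_total, sensitivity=1.0, rng=None):
    """Split the privacy budget in proportion to eigenvalues and add Laplace noise.

    Filters with larger eigenvalues (larger contribution to the output response)
    get a larger share of epsilon_total and are therefore perturbed less.
    """
    rng = np.random.default_rng() if rng is None else rng
    budgets = epsilon_total * eigvals / eigvals.sum()
    noisy = np.empty_like(filters)
    for i, (f, eps) in enumerate(zip(filters, budgets)):
        scale = sensitivity / eps                   # Laplace mechanism scale b = sensitivity / epsilon
        noisy[i] = f + rng.laplace(0.0, scale, size=f.shape)
    return noisy, budgets

# Example: 5x5 grayscale patches, 6 filters, total budget epsilon = 1.0
patches = np.random.rand(10000, 25)
filters, eigvals = saab_filters(patches, num_filters=6)
noisy_filters, budgets = dpsaab_perturb(filters, eigvals, epsilon_total=1.0)
```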

