IEEE Symposium on Security and Privacy

Differentially Private Model Publishing for Deep Learning



Abstract

Deep learning techniques based on neural networks have shown significant success in a wide range of AI tasks. Large-scale training datasets are one of the critical factors for their success. However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risk of privacy leakage. The recent growing trend of sharing and publishing pre-trained models further aggravates such privacy risks. To tackle this problem, we propose a differentially private approach for training neural networks. Our approach includes several new techniques for optimizing both privacy loss and model accuracy. We employ a generalization of differential privacy called concentrated differential privacy (CDP), with a formal and refined privacy-loss analysis for two different data-batching methods. We implement a dynamic privacy budget allocator over the course of training to improve model accuracy. Extensive experiments demonstrate that our approach effectively improves privacy-loss accounting, training efficiency, and model quality under a given privacy budget.
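To illustrate the kind of training the abstract describes, below is a minimal, self-contained sketch of one differentially private gradient-descent step in the style of DP-SGD: each example's gradient is clipped to a bounded L2 norm, the clipped gradients are summed, and Gaussian noise scaled to the clipping bound is added before the parameter update. This is a generic illustration, not the paper's exact algorithm; the function name, parameters, and the plain-Python representation of gradients are all assumptions made for the example.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One differentially private SGD step (generic DP-SGD-style sketch).

    per_example_grads: list of per-example gradient vectors (lists of floats).
    clip_norm: L2 bound C applied to each example's gradient.
    noise_multiplier: Gaussian noise stddev is noise_multiplier * C.
    """
    dim = len(params)
    summed = [0.0] * dim
    for g in per_example_grads:
        # Clip each example's gradient so its L2 norm is at most clip_norm.
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    n = len(per_example_grads)
    sigma = noise_multiplier * clip_norm
    # Add Gaussian noise to the clipped sum, then average over the batch.
    noisy_mean = [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]
    # Standard gradient-descent update with the noisy average gradient.
    return [params[i] - lr * noisy_mean[i] for i in range(dim)]
```

The clipping bound limits any single example's influence on the update, which is what makes the added Gaussian noise yield a differential-privacy guarantee; the cumulative privacy loss over many such steps is then tracked by an accountant (here, the paper's CDP-based analysis).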
