IEEE Global Communications Conference

Using Adversarial Noises to Protect Privacy in Deep Learning Era

Abstract

The unprecedented accuracy of deep learning methods has established them as the foundation of new AI-based services on the Internet. At the same time, it raises obvious privacy issues: deep-learning-aided privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools. We also propose two new metrics to measure image privacy, and, building on the adversarial-example idea, two image privacy protection schemes based on these metrics. The performance of our schemes is validated by simulation on a large-scale dataset. Our study shows that image privacy can be protected by adding a small amount of noise, and that the added noise has an impact on image quality that is imperceptible to humans.
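The paper's two schemes are metric-specific, but the adversarial-example idea it builds on can be illustrated with a standard gradient-sign perturbation (FGSM). Below is a minimal sketch, assuming a PyTorch image classifier; the `epsilon` value and the usage lines are hypothetical placeholders, and this is an illustration of the general technique, not the authors' actual protection schemes.

```python
# Minimal FGSM-style sketch of the adversarial-noise idea: add a small,
# loss-increasing perturbation so a deep model misreads the image, while
# the change stays visually imperceptible. Illustrative only.
import torch
import torch.nn.functional as F

def adversarial_noise(model, image, label, epsilon=2.0 / 255):
    """Return a perturbed copy of `image` (values assumed in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the model's loss; clamp so the
    # perturbation stays small and the result remains a valid image.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained classifier:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# protected = adversarial_noise(model, img_batch, true_labels)
```

The key design point matching the abstract's claim: `epsilon` bounds the per-pixel change, so privacy protection (misleading the model) trades off directly against visible distortion.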