Home > Conference Papers > IEEE/CVF Conference on Computer Vision and Pattern Recognition > The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks



Abstract

This paper studies model-inversion attacks, in which the access to a model is abused to infer information about the training data. Since its first introduction by~\cite{fredrikson2014privacy}, such attacks have raised serious concerns given that training data usually contain privacy-sensitive information. Thus far, successful model-inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression. Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results. Here we present a novel attack method, termed the \emph{generative model-inversion attack}, which can invert deep neural networks with high success rates. Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs) and use it to guide the inversion process. Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin---highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks. Our extensive experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about $75\%$ for reconstructing face images from a state-of-the-art face recognition classifier. We also show that differential privacy, in its canonical form, is of little avail to defend against our attacks.
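The inversion strategy the abstract outlines (search the latent space of a generator trained on public data for an input that the target classifier assigns to a chosen identity) can be sketched at toy scale. The linear "generator" and linear-softmax "classifier" below are hypothetical stand-ins, not the paper's networks, and finite-difference gradient descent replaces backpropagation purely for self-containment:

```python
import math
import random

random.seed(0)

# Toy dimensions: latent code -> generated "image" -> identity logits.
DZ, DX, NC = 4, 6, 3

# Hypothetical stand-ins (NOT the paper's architectures): a fixed linear
# "generator" playing the role of the GAN prior learned from public data,
# and a fixed linear-softmax "classifier" under attack.
W = [[random.gauss(0.0, 1.0) for _ in range(DZ)] for _ in range(DX)]
V = [[random.gauss(0.0, 1.0) for _ in range(DX)] for _ in range(NC)]

def generate(z):
    """Map a latent code through the generator prior to an input."""
    return [sum(W[i][j] * z[j] for j in range(DZ)) for i in range(DX)]

def target_prob(x, target):
    """Softmax probability the classifier assigns to the target identity."""
    logits = [sum(V[c][i] * x[i] for i in range(DX)) for c in range(NC)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return exps[target] / sum(exps)

def identity_loss(z, target):
    """Negative log-likelihood of the target identity for G(z)."""
    return -math.log(target_prob(generate(z), target))

def invert(z_init, target, steps=500, lr=0.02, eps=1e-4):
    """Descend on the latent code with finite-difference gradients,
    searching the generator's range for an input the classifier labels
    as the target identity -- the guided inversion in miniature."""
    z = z_init[:]
    for _ in range(steps):
        base = identity_loss(z, target)
        grad = []
        for j in range(DZ):
            zp = z[:]
            zp[j] += eps
            grad.append((identity_loss(zp, target) - base) / eps)
        z = [z[j] - lr * grad[j] for j in range(DZ)]
    return z

z_init = [random.gauss(0.0, 1.0) for _ in range(DZ)]
before = identity_loss(z_init, target=0)
after = identity_loss(invert(z_init, target=0), target=0)
print(f"identity loss before: {before:.3f}  after: {after:.3f}")
```

Restricting the search to the generator's range is what distinguishes this from pixel-space inversion: every candidate the optimizer visits already looks like a sample from the public prior.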


