International Conference on Intelligent Systems and Knowledge Engineering

Generative Networks for Adversarial Examples with Weighted Perturbations



Abstract

In adversarial deep learning, adversarial examples are used to attack target models such as deep neural networks (DNNs). Adversarial examples are typically constructed by adding malicious perturbations to the original images, where the added perturbations are not easily perceived by humans. To achieve high attack success rates, existing attack strategies tend to increase the magnitude of global perturbations. However, as the magnitude grows, the attacks become easily perceptible to humans. To address this problem, a new approach is proposed to generate adversarial examples with weighted perturbations, abbreviated as WP-AdvGAN. The weight distribution over the perturbations is determined by the sensitivity of the target model to the corresponding local regions of the adversarial example. Local regions with high sensitivity have greater impact on the target model's decision, so the perturbation weights for these regions are increased in the adversarial example, and vice versa. Experiments on the MNIST dataset demonstrate that adversarial examples generated by the proposed WP-AdvGAN are similar to the original images and achieve high attack success rates under both white-box and black-box attack settings.
