FineFool: A novel DNN object contour attack on image recognition based on the attention perturbation adversarial technique


Abstract

Deep neural networks (DNNs) have various applications owing to their feature learning ability. However, recent studies have shown that DNNs are vulnerable to adversarial examples. Current research on the generation of adversarial examples primarily focuses on improving the attack success rate (ASR) while reducing the perturbation size. By visualizing heat maps, previous works have found that the feature extraction ability of DNNs rests on precisely locating object contours and directing the correct attention to those areas. Therefore, perturbations in adversarial examples weaken the localization of object contours in deep hidden layers and narrow the attention scope over the object area, which leads to successful attacks. Inspired by this observation, we propose FineFool, a novel adversarial attack based on the attention perturbation adversarial technique, which combines channel-spatial attention and pixel-spatial attention. The former reduces the area the DNN attends to, while the latter induces mislocation of the object contours. By using the attention perturbation adversarial technique to target the positions of legitimate examples that are most vulnerable, FineFool achieves a higher ASR with fewer perturbations than state-of-the-art adversarial attacks. Extensive experiments are carried out on the MNIST, CIFAR10, and ImageNet datasets against six models. The results show that FineFool achieves the best performance among the six baselines; more specifically, its mean ASR values for untargeted and targeted attacks across all datasets are 99.23% and 98.26%, respectively, the highest under white-box attack settings.
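This record does not include the paper's formulation, but the core idea described in the abstract, concentrating an iterative gradient attack on the image regions a DNN attends to, can be illustrated with a short sketch. The Python/PyTorch listing below is a minimal, hypothetical illustration, not FineFool's actual algorithm: the function attention_weighted_attack, the feature_layer hook, and the attention map used here (channel-averaged feature magnitudes upsampled to input size) are all assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def attention_weighted_attack(model, feature_layer, x, y, eps=8/255, steps=10):
        # Minimal sketch of an attention-weighted iterative gradient attack.
        # All names and the attention formulation are illustrative assumptions,
        # not the paper's actual API or algorithm. Inputs x are assumed in [0, 1].
        feats = {}
        hook = feature_layer.register_forward_hook(
            lambda module, inputs, output: feats.update(out=output))
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)  # forward pass fills feats['out']
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Pixel-spatial attention proxy: channel-averaged feature magnitude,
            # upsampled to the input resolution; large values roughly trace the
            # object contours the network relies on.
            attn = feats['out'].detach().abs().mean(dim=1, keepdim=True)
            attn = F.interpolate(attn, size=x.shape[-2:], mode='bilinear',
                                 align_corners=False)
            attn = attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-12)
            # Concentrate the perturbation budget on high-attention regions
            # instead of spreading it uniformly over the image.
            x_adv = (x_adv.detach() + (eps / steps) * attn * grad.sign()).clamp(0, 1)
        hook.remove()
        return x_adv

In this sketch the per-step budget eps/steps is scaled by the normalized attention map, so pixels on salient contours receive nearly the full step while background pixels are barely modified; the abstract attributes FineFool's smaller perturbations and higher ASR to this kind of targeting, though its actual channel-spatial and pixel-spatial attention modules are more elaborate than this proxy.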

Bibliographic details

  • Source
    Computers & Security, 2021, No. 5, pp. 102220.1-102220.24 (24 pages)
  • Author affiliations

    Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023, China; College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China;

    College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China;

    College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China;

    College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China;

    College of Computer Science and Technology, Zhejiang University, Hangzhou 310007, China;

    Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023, China;

    College of Computer Science and Technology, Zhejiang University, Hangzhou 310007, China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Adversarial attack; Deep learning; Attention perturbation adversarial technique; Perturbation visualization; Targeted attack

