Pattern Recognition: The Journal of the Pattern Recognition Society

Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation


Abstract

Deep learning has shown superiority in dealing with complicated and professional tasks (e.g., computer vision, audio, and language processing). However, research has confirmed that Deep Neural Networks (DNNs) are vulnerable to carefully crafted adversarial perturbations, which confuse DNNs on specific tasks. In the object detection domain, the background contributes little to object classification, and adversarial perturbations added to the background do not improve the adversarial effect in fooling deep neural detection models, yet they induce substantial distortions in the generated examples. Based on this observation, we introduce an adversarial attack algorithm named the Adaptive Object-oriented Adversarial Method (AO(2)AM). It aims to fool deep neural object detection networks by applying adaptive accumulation of object-based gradients and adding adaptive object-based adversarial perturbations only onto the objects rather than the whole frame of the input image. AO(2)AM can effectively push the representations of the generated adversarial samples close to the decision boundary in the latent space and force deep neural detection networks to yield inaccurate locations and false classifications during object detection. Compared with existing adversarial attack methods, which generate perturbations acting on the global scale of the original inputs, the adversarial examples produced by AO(2)AM effectively fool deep neural object detection networks while maintaining high structural similarity with the corresponding clean inputs. When attacking Faster R-CNN, AO(2)AM attains an attack success rate (ASR) above 98.00% on pre-processed Pascal VOC 2007&2012 (Val) and an SSIM above 0.870. When fooling SSD, AO(2)AM achieves an SSIM exceeding 0.980 under the L-2 norm constraint. On SSIM and Mean Attack Ratio, AO(2)AM outperforms adversarial attack methods based on global-scale perturbations. (C) 2021 Elsevier Ltd. All rights reserved.
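
As a rough illustration of the object-oriented idea described in the abstract, the sketch below confines an iterative gradient-based perturbation to the ground-truth object boxes instead of the whole image. It is a minimal PyTorch-style sketch, not the authors' AO(2)AM: the torchvision-style detector interface `model([image], targets)` returning a dict of losses, the step size `alpha`, the L-inf budget `eps`, and the box format are all assumptions made for illustration.

```python
import torch

def object_masked_attack(model, image, boxes, targets, steps=10, alpha=2/255, eps=8/255):
    """Minimal sketch: iterative gradient attack confined to object boxes.

    image   : (C, H, W) float tensor in [0, 1]
    boxes   : iterable of (x1, y1, x2, y2) integer object boxes
    targets : annotations accepted by the (assumed) detector loss
    """
    # Binary mask: 1 inside object boxes, 0 in the background, so the
    # perturbation never touches background pixels.
    mask = torch.zeros_like(image)
    for x1, y1, x2, y2 in boxes:
        mask[:, y1:y2, x1:x2] = 1.0

    adv = image.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        # Assumed interface: a torchvision-style detector in training mode
        # returns a dict of loss terms; sum them into a scalar objective.
        loss_dict = model([adv], targets)
        loss = sum(loss_dict.values())
        loss.backward()
        # Ascend the detection loss only inside the object mask
        # (the "object-based gradient" restriction).
        adv = adv.detach() + alpha * adv.grad.sign() * mask
        # Keep the perturbation within an L-inf budget and valid pixel range.
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv
```

Restricting the update to the object regions is what keeps the structural similarity (SSIM) of the adversarial example with the clean input high, since background pixels are left untouched; the paper's method additionally uses adaptive accumulation of these object-based gradients, which this sketch does not attempt to reproduce.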
