Adversarial attacks on Faster R-CNN object detector



Abstract

Adversarial attacks have stimulated research interest in the field of deep learning security. However, most existing adversarial attack methods were developed for classification. In this paper, we use Projected Gradient Descent (PGD), the strongest first-order attack on classification, to produce adversarial examples against the total loss of the Faster R-CNN object detector. Compared with the state-of-the-art Dense Adversary Generation (DAG) method, our attack is more efficient and more powerful in both white-box and black-box attack settings, and is applicable to a variety of neural network architectures. On Pascal VOC2007, under white-box attack, DAG reduces Faster R-CNN with a VGG16 backbone to 5.92% mAP using 41.42 iterations on average, while our method achieves 0.90% using only 4 iterations. We also analyze how attacks differ between classification and detection, and find that in addition to misclassification, adversarial examples on detection also lead to mis-localization. Besides, we validate the adversarial effectiveness of both the Region Proposal Network (RPN) loss and the Fast R-CNN loss, the two components of the total loss. Our research will provide inspiration for further efforts on adversarial attacks against other vision tasks. (C) 2019 Elsevier B.V. All rights reserved.
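The core procedure the abstract describes — iterated signed-gradient ascent on the detector's total loss, projected back into an L-infinity ball around the clean image — can be sketched in a few lines. This is a toy illustration, not the paper's code: the real attack differentiates the Faster R-CNN total loss (RPN loss plus Fast R-CNN loss) with respect to the input image, while here a hypothetical scalar gradient function stands in so the step-and-project logic is runnable without a deep-learning framework. All names and parameter values below are illustrative.

```python
def pgd_attack(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=4):
    """Minimal PGD sketch (ascent on the loss).

    x       : list of floats, a stand-in for image pixels in [0, 1]
    grad_fn : returns dL/dx at a point; in the real attack this would be
              the gradient of the detector's total loss w.r.t. the image
    eps     : L-infinity perturbation budget
    alpha   : per-iteration step size
    steps   : number of PGD iterations (the abstract reports that only
              about 4 iterations suffice against Faster R-CNN)
    """
    x_adv = list(x)
    for _ in range(steps):
        g = grad_fn(x_adv)
        # signed-gradient ascent step on the loss
        x_adv = [xi + alpha * (1 if gi > 0 else -1 if gi < 0 else 0)
                 for xi, gi in zip(x_adv, g)]
        # project back into the eps-ball around the clean input ...
        x_adv = [min(max(xa, xo - eps), xo + eps) for xa, xo in zip(x_adv, x)]
        # ... and into the valid pixel range [0, 1]
        x_adv = [min(max(xa, 0.0), 1.0) for xa in x_adv]
    return x_adv

# Toy stand-in gradient: pushes each pixel away from 0.5 (illustration only)
toy_grad = lambda x: [xi - 0.5 for xi in x]
adv = pgd_attack([0.4, 0.6, 0.5], toy_grad)
```

Because the projection runs after every step, the final perturbation never exceeds `eps` per pixel regardless of how many iterations are taken; attacking the total loss (rather than per-proposal class scores, as DAG does) is what lets a single gradient computation per step perturb all proposals at once.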

Bibliographic record

  • Source
    Neurocomputing | 2020, Issue 21 | pp. 87-95 | 9 pages
  • Authors

  • Author affiliations

    Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China | Univ Chinese Acad Sci, Beijing, Peoples R China;

    Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China;

    Peking Univ, Sch Math Sci, Beijing 100871, Peoples R China;

    Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Adversarial attack; Object detection; White-box attack; Black-box attack;

