...
Neurocomputing

Adversarial patch attacks against aerial imagery object detectors


Abstract

Although Deep Neural Network (DNN)-based object detectors are widely used in various fields, especially for aerial imagery object detection, it has been observed that a small, elaborately designed patch attached to an image can mislead DNN-based detectors into producing erroneous output. However, in previous works the target detectors under attack are quite simple and the attack efficiency is relatively low, making these attacks impractical in real scenarios. To address these limitations, a new adversarial patch attack algorithm is proposed in this paper. Firstly, we designed a novel loss function that uses the intermediate outputs of the model, rather than the final outputs interpreted by the detection head, to optimize adversarial patches. Experiments conducted on the DOTA, RSOD, and NWPU VHR-10 datasets demonstrate that our method can significantly degrade the performance of the detectors. Secondly, we conducted intensive experiments to investigate the impact of different outputs of the detection model on generating adversarial patches, demonstrating that the class score is not as effective as the objectness score. Thirdly, we comprehensively analyzed attack transferability across different aerial imagery datasets, verifying that patches generated on one dataset are also effective in attacking another. Moreover, we proposed ensemble training to boost the attack's transferability across models. Our work raises a warning about the application of DNN-based object detectors in aerial imagery. (c) 2023 Elsevier B.V. All rights reserved.
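The abstract's core idea is to optimize the patch against the detector's raw intermediate outputs (e.g., per-anchor objectness logits) rather than the post-processed detections. The following is a minimal sketch of that general approach in PyTorch, not the paper's exact algorithm; the helpers `detector_raw_outputs` and `paste_patch` are hypothetical stand-ins for model- and dataset-specific code.

```python
# Sketch of objectness-suppression patch optimization (illustrative only).
# Assumes a frozen YOLO-style detector whose raw head outputs expose
# per-anchor objectness logits of shape (batch, num_anchors).
import torch

def optimize_patch(detector_raw_outputs, paste_patch, loader,
                   patch_size=64, steps=1000, lr=0.03, device="cuda"):
    # The patch pixels are the only optimized variables; the detector is frozen.
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _, (images, boxes) in zip(range(steps), loader):
        images = images.to(device)
        # Render the current patch onto each target object; scaling and
        # placement details are dataset specific and omitted here.
        patched = paste_patch(images, patch, boxes)

        # Use raw/intermediate head outputs, not post-NMS detections, so the
        # loss stays differentiable; here we simply drive objectness down.
        objectness = detector_raw_outputs(patched)            # (B, num_anchors)
        loss = objectness.sigmoid().max(dim=1).values.mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)   # keep the patch a valid image

    return patch.detach()
```

Under these assumptions, ensemble training for cross-model transferability would amount to summing this loss over several detectors' raw outputs before the backward pass.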
