IEEE/CIC International Conference on Communications in China

FA: A Fast Method to Attack Real-time Object Detection Systems



Abstract

With the development of deep learning, image and video processing plays an important role in the age of 5G communication. However, deep neural networks are vulnerable: subtle perturbations can lead to incorrect classification results. Adversarial attacks on artificial intelligence models have therefore attracted increasing interest. In this study, we propose a new method, named FA, to generate adversarial examples for object detection models. Building on the generative adversarial network (GAN), we combine classification and location information so that the generated image looks as real as possible. Experimental results on the PASCAL VOC dataset show that our method generates adversarial images efficiently and quickly. We then test the transferability of the adversarial samples across different datasets and object detection models such as YOLOv4, where they also achieve a degree of transfer performance. Our work provides a basis for further exploring the defects of deep learning and improving the robustness of such systems.
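The abstract describes a generator objective that attacks both heads of a detector at once: it suppresses the classifier's confidence in the true class, degrades the localization (bounding-box) output, and keeps the image realistic via the GAN term. The sketch below illustrates how such a combined loss might be assembled; all function names, loss forms, and weights are illustrative assumptions, not the paper's actual implementation.

```python
import math

def classification_attack_loss(cls_scores, true_class):
    """Softmax the detector's class scores, then penalize confidence in the
    true class: minimizing this loss drives the true-class probability down."""
    exps = [math.exp(s) for s in cls_scores]
    p_true = exps[true_class] / sum(exps)
    return -math.log(1.0 - p_true + 1e-12)

def localization_attack_loss(pred_box, true_box):
    """IoU between the predicted and ground-truth boxes (x1, y1, x2, y2).
    Minimizing IoU pushes the predicted box away from the object."""
    x1 = max(pred_box[0], true_box[0]); y1 = max(pred_box[1], true_box[1])
    x2 = min(pred_box[2], true_box[2]); y2 = min(pred_box[3], true_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(true_box) - inter
    return inter / union

def combined_generator_loss(cls_scores, true_class, pred_box, true_box,
                            gan_realism_loss, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three terms the abstract mentions: attack the
    classification head, attack the localization head, and stay realistic."""
    return (alpha * classification_attack_loss(cls_scores, true_class)
            + beta * localization_attack_loss(pred_box, true_box)
            + gamma * gan_realism_loss)
```

In a GAN-based pipeline of this kind, the generator would be updated by backpropagating a loss of this shape through a frozen detector, with the realism term supplied by the discriminator; the weights alpha, beta, gamma trade attack strength against image quality.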

