International Conference on Artificial Neural Networks

Physical Adversarial Attacks by Projecting Perturbations

Abstract

Research on adversarial attacks analyses how patterns such as images can be manipulated slightly so that a classifier believes it recognises a pattern with a wrong label, although the correct label is obvious to humans. In traffic sign recognition, previous physical adversarial attacks were mainly based on stickers or graffiti on the sign's surface. In this paper, we propose and experimentally verify a new threat model that projects perturbations onto street signs via projectors or simulated laser pointers. No physical manipulation is required, which makes the attack difficult to detect. Attacks via projection imply new constraints, such as exclusively increasing colour intensities or manipulating only certain colour channels. As exemplary experiments, we fool neural networks into classifying stop signs as priority signs solely by projecting optimised perturbations onto the original traffic signs.
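To make the projection constraints concrete, the following is a minimal sketch, not the authors' implementation, of how such an attack could be optimised in PyTorch. The classifier `model`, the input `image`, and the target class index are hypothetical placeholders; the essential point is that the perturbation is kept non-negative (a projector can only add light) and confined to selected colour channels (e.g. a red laser only affects the red channel).

```python
# Sketch of the projection threat model; `model`, `image` and the target
# class are hypothetical placeholders, not from the paper.
import torch
import torch.nn.functional as F

def projector_attack(model, image, target_class, channels=(0,),
                     steps=200, lr=0.05, max_intensity=0.5):
    """Optimise an additive perturbation under projector constraints.

    `image` is a (1, 3, H, W) tensor with values in [0, 1].
    The perturbation is clamped to be non-negative (light only adds
    intensity) and masked to the projectable colour channels.
    """
    # Mask restricts the perturbation to the selected colour channels.
    mask = torch.zeros_like(image)
    mask[:, list(channels)] = 1.0

    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        # Clamp enforces non-negativity and a brightness bound.
        bounded = delta.clamp(0.0, max_intensity) * mask
        perturbed = (image + bounded).clamp(0.0, 1.0)
        # Minimising cross-entropy w.r.t. the target drives the
        # classifier towards the wrong label.
        loss = F.cross_entropy(model(perturbed),
                               torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (image + delta.clamp(0.0, max_intensity) * mask).clamp(0.0, 1.0)
```

Under these assumptions, fooling a stop sign into a priority sign would amount to calling `projector_attack(model, stop_sign, PRIORITY_SIGN)`; the bounded, masked perturbation is what would be physically projected onto the sign.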
