International Conference on Artificial Neural Networks

Physical Adversarial Attacks by Projecting Perturbations



Abstract

Research on adversarial attacks analyses how to slightly manipulate patterns such as images so that a classifier believes it recognises a pattern with a wrong label, although the correct label is obvious to humans. In traffic sign recognition, previous physical adversarial attacks were mainly based on stickers or graffiti on the sign's surface. In this paper, we propose and experimentally verify a new threat model that projects perturbations onto street signs via projectors or simulated laser pointers. No physical manipulation is required, which makes the attack difficult to detect. Attacks via projection imply new constraints, such as exclusively increasing colour intensities or manipulating only certain colour channels. As exemplary experiments, we fool neural networks into classifying stop signs as priority signs solely by projecting optimised perturbations onto original traffic signs.
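The projection constraints mentioned in the abstract (a projector can only add light, and a laser pointer affects only certain colour channels) can be sketched as a simple clamping step applied to a candidate perturbation before it reaches the sign. The function below is a hypothetical illustration of that idea, not the authors' implementation; the name `apply_projection` and the channel-mask parameter are assumptions for the example.

```python
import numpy as np

def apply_projection(image, delta, channel_mask=(1.0, 1.0, 1.0)):
    """Simulate projecting a perturbation onto a traffic sign.

    A projector can only *add* light, so negative perturbation
    values are clipped away before being applied. The channel
    mask restricts the attack to selected colour channels,
    e.g. a red laser pointer would use mask (1, 0, 0).
    Pixel values are floats in [0, 1], shape (H, W, 3).
    """
    delta = np.clip(delta, 0.0, None)          # additive-only: light cannot be removed
    delta = delta * np.asarray(channel_mask)   # per-channel constraint
    return np.clip(image + delta, 0.0, 1.0)    # stay in the valid pixel range
```

In an optimisation loop, a gradient-based attack would update `delta` against the classifier's loss and re-apply this clamping at every step, so the optimised perturbation always remains physically realisable by a projector.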

