German Conference on Pattern Recognition

A Randomized Gradient-Free Attack on ReLU Networks



Abstract

It has recently been shown that not only neural networks but also other classifiers are vulnerable to so-called adversarial attacks: in object recognition, for example, an almost imperceptible change to the image changes the decision of the classifier. Relatively fast heuristics have been proposed to produce these adversarial inputs, but the problem of finding the optimal adversarial input, that is, the one with the minimal change to the input, is NP-hard. While methods based on mixed-integer optimization that find the optimal adversarial input have been developed, they do not scale to large networks. Currently, the attack scheme proposed by Carlini and Wagner is considered to produce the best adversarial inputs. In this paper we propose a new attack scheme for the class of ReLU networks, based on a direct optimization over the resulting linear regions. In our experimental validation we improve over the Carlini-Wagner attack in all but one of 18 experiments, with a relative improvement of up to 9%. As our approach is based on the geometrical structure of ReLU networks, it is less susceptible to defences targeting their functional properties.
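The geometric structure the abstract refers to is that a ReLU network partitions its input space into polytopes on each of which the network is exactly affine. The NumPy sketch below (arbitrary illustrative weights, not the paper's actual attack) shows this local-linearity property: once the ReLU activation pattern at a point is fixed, the network collapses to a single affine map, which is what a region-wise optimization can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network with arbitrary weights (illustration only).
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

def forward(x):
    """Full network: x -> W2 relu(W1 x + b1) + b2."""
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def local_affine(x):
    """Affine map f(z) = A z + c valid on the linear region containing x."""
    pattern = (W1 @ x + b1 > 0).astype(float)  # ReLU activation pattern at x
    D = np.diag(pattern)                       # zeroes out inactive units
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = rng.standard_normal(3)
A, c = local_affine(x)
# Inside the region, the affine map reproduces the network exactly.
assert np.allclose(forward(x), A @ x + c)
```

Within one such region an attack objective over the network reduces to an objective over an affine function, which is why optimizing directly on the linear regions is tractable.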
