International Conference on Multimedia Modeling

An Effective Way to Boost Black-Box Adversarial Attack

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples. Generally speaking, adversarial examples are crafted by adding a small-magnitude perturbation to input samples; such a perturbation hardly misleads human observers but causes well-trained models to misclassify. Most existing iterative adversarial attack methods suffer from low success rates when fooling models in a black-box manner, and we find that this is because perturbations neutralize each other during the iterative process. To address this issue, we propose a novel boosted iterative method that effectively improves success rates. We conduct experiments on the ImageNet dataset with five normally trained classification models. The experimental results show that our proposed strategy significantly improves the success rate of fooling models in a black-box manner. Furthermore, it also outperforms the momentum iterative method (MI-FGSM), which won first place in the NeurIPS Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
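
The abstract does not spell out the proposed boosting strategy, but the MI-FGSM baseline it compares against is well documented: a momentum term accumulated over gradient steps keeps successive perturbations from cancelling each other, which is the same neutralization failure mode the authors describe. Below is a minimal PyTorch sketch of MI-FGSM for context; the model, loss, pixel range, and hyperparameter values are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Sketch of MI-FGSM (Dong et al., 2018): accumulate a momentum
    term over L1-normalized gradients so successive attack steps
    reinforce rather than neutralize each other."""
    alpha = eps / steps                 # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)             # momentum accumulator
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # L1-normalize the gradient per example before accumulating momentum
        g = mu * g + grad / grad.abs().flatten(1).sum(1).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the L-inf eps-ball; assumes pixels in [0, 1]
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

The paper's boosted method presumably modifies this iterative loop; the abstract only identifies the mutual cancellation of perturbations as the problem it addresses.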
