International Conference on Computer Vision (ICCV)

Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once


Abstract

Modern deep neural networks are often vulnerable to adversarial samples. Following the first optimization-based attack method, many subsequent methods have been proposed to improve attack performance and speed. Recently, generation-based methods have received much attention because they use feed-forward networks to generate adversarial samples directly, avoiding the time-consuming iterative attack procedure of optimization-based and gradient-based methods. However, current generation-based methods can only attack one specific target (category) with one model, which makes them impractical for real classification systems that often have hundreds or thousands of categories. In this paper, we propose the first Multi-target Adversarial Network (MAN), which can generate multi-target adversarial samples with a single model. By incorporating the specified category information into the intermediate features, it can attack any category of the target classification model at runtime. Experiments show that the proposed MAN produces stronger attacks and achieves better transferability than previous state-of-the-art methods on both the multi-target and single-target attack tasks. We further use the adversarial samples generated by our MAN to improve the robustness of the classification model, which then achieves better classification accuracy than other methods when attacked in various ways.
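To make the conditioning idea concrete, the sketch below shows one plausible way a single feed-forward generator could inject target-category information into intermediate features, as the abstract describes. This is not the authors' implementation; the class names, layer sizes, embedding-based fusion, and the perturbation bound eps are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a generator conditioned on the target
# class by adding a learned class embedding to intermediate features, so one
# model can craft a perturbation aimed at any chosen category.
import torch
import torch.nn as nn


class MultiTargetGenerator(nn.Module):
    """Hypothetical multi-target adversarial generator (illustrative only)."""

    def __init__(self, num_classes=1000, feat_ch=64, eps=16 / 255):
        super().__init__()
        self.eps = eps  # assumed L-infinity bound on the perturbation
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Target-category information to be fused into intermediate features.
        self.class_embed = nn.Embedding(num_classes, feat_ch)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, target_class):
        feats = self.encoder(x)                                    # (B, C, H/4, W/4)
        cond = self.class_embed(target_class)[:, :, None, None]   # (B, C, 1, 1)
        feats = feats + cond                                       # fuse class info
        delta = self.decoder(feats) * self.eps                     # bounded perturbation
        return torch.clamp(x + delta, 0.0, 1.0)                    # adversarial image


# Usage: one forward pass yields adversarial images aimed at any category.
# gen = MultiTargetGenerator()
# targets = torch.full((images.size(0),), 243, dtype=torch.long)
# x_adv = gen(images, targets)
```

In this sketch the generator is trained against a fixed target classifier with a targeted loss, so a single set of weights serves every category; the specific fusion operator (addition here) is an assumption for illustration.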