International Conference on Artificial Neural Networks

HLR: Generating Adversarial Examples by High-Level Representations


Abstract

Neural networks can be fooled by adversarial examples. Many methods have recently been proposed to generate adversarial examples, but these works concentrate mainly on pixel-wise information, which limits the transferability of the resulting adversarial examples. In contrast, we introduce a perceptual module that extracts high-level representations and changes the manifold of the adversarial examples. In addition, we propose a novel network structure to replace the generative adversarial network (GAN). The improved structure ensures that adversarial examples remain highly similar to the original inputs and stabilizes the training process. Extensive experiments demonstrate that our method significantly improves transferability. Furthermore, the adversarial-training defence is ineffective against our attack.
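The abstract gives no implementation details, but the core idea of steering a perturbation with a loss on high-level representations rather than raw pixels can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: a truncated pretrained VGG-16 stands in for the perceptual module, and the MSE feature loss, step size, and pixel budget are illustrative choices, not the paper's actual design (which also replaces the GAN with a custom structure not shown here).

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical sketch: perturb an image so that its HIGH-LEVEL features,
# not its pixels, move away from the clean representation. The truncated
# pretrained VGG-16 below is an assumed stand-in for the paper's
# perceptual module.
device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:23].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def hlr_attack(x, steps=20, alpha=0.01, eps=8 / 255):
    """Iteratively maximize the distance between the high-level features of
    the adversarial and clean images, keeping pixel changes within an
    L-infinity ball of radius eps (all hyperparameters are assumptions)."""
    x = x.to(device)
    with torch.no_grad():
        feat_clean = vgg(x)  # high-level features of the clean input
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feat_adv = vgg(x_adv)
        # Feature-space loss: push adversarial features away from clean ones.
        loss = F.mse_loss(feat_adv, feat_clean)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the feature loss
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()       # stay a valid image
    return x_adv

# Usage on a dummy batch (ImageNet normalization omitted for brevity):
x = torch.rand(1, 3, 224, 224)
x_adv = hlr_attack(x)
```

Because the loss is computed on feature maps shared across many vision backbones rather than on pixels, perturbations crafted this way tend to transfer better between models, which matches the transferability claim in the abstract.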