Home > Foreign Conference Proceedings > International Conference on Cloud Computing and Security > Attack on Deep Steganalysis Neural Networks

Attack on Deep Steganalysis Neural Networks


Abstract

Deep neural networks (DNNs) have achieved state-of-the-art performance on image classification and pattern recognition in recent years, and have also shown their power in the steganalysis field. However, research has revealed that DNNs can easily be fooled by adversarial examples generated by adding perturbations to the input. Deep steganalysis neural networks face the same potential threat. In this paper we discuss and analyze two different attack methods and apply them to attack deep steganalysis neural networks. We define the model and propose concrete attack steps; the results show that the two methods achieve success rates of 96.02% and 90.25%, respectively, on the target DNN. Thus, the adversarial example attack is effective against deep steganalysis neural networks.
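The abstract does not name the two attack methods, so the following is only a minimal sketch of the general idea it describes: fooling a classifier by adding a small gradient-based perturbation to the input. It uses FGSM (the fast gradient sign method) as an illustrative perturbation attack, and a toy CNN as a hypothetical stand-in for a steganalysis network; neither is the paper's actual model or method.

```python
# Hedged sketch: FGSM-style adversarial perturbation against a toy
# binary classifier (cover vs. stego). Illustrative only -- the paper's
# two attack methods and target network are not specified here.
import torch
import torch.nn as nn

class ToyStegAnalyzer(nn.Module):
    """Minimal placeholder classifier, NOT the paper's steganalysis DNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),  # two classes: cover / stego
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, label, epsilon=0.03):
    """One-step FGSM: x_adv = clip(x + epsilon * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # perturb toward higher loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep valid pixel range

model = ToyStegAnalyzer().eval()
x = torch.rand(1, 1, 32, 32)   # pretend grayscale image in [0, 1]
label = torch.tensor([0])      # pretend ground-truth "cover" label
x_adv = fgsm_attack(model, x, label)
# The perturbation is imperceptibly small: bounded by epsilon per pixel.
print(float((x_adv - x).abs().max()))
```

The sign of the gradient, rather than the gradient itself, bounds the per-pixel change by `epsilon`, which is why such perturbations can remain visually negligible while flipping the classifier's decision.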
