International Conference on Cloud Computing and Security

Attack on Deep Steganalysis Neural Networks



Abstract

Deep neural networks (DNNs) have achieved state-of-the-art performance on image classification and pattern recognition in recent years, and have also shown their power in the field of steganalysis. However, research has revealed that DNNs can easily be fooled by adversarial examples generated by adding perturbations to the input, and deep steganalysis neural networks face the same potential threat. In this paper we discuss and analyze two different attack methods and apply them to attack deep steganalysis neural networks. We define the model and propose concrete attack steps; the results show that the two methods achieve success rates of 96.02% and 90.25%, respectively, on the target DNN. Thus, the adversarial example attack is effective against deep steganalysis neural networks.
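To illustrate the kind of perturbation attack the abstract describes, below is a minimal sketch of a gradient-sign (FGSM-style) attack against a binary cover/stego classifier. The paper does not disclose which two attack methods it evaluates, so FGSM is used here only as a hypothetical stand-in; SimpleSteganalyzer is a placeholder network, not the target DNN from the paper.

```python
# Hedged sketch: FGSM-style adversarial perturbation of a stego image so that a
# (placeholder) steganalysis CNN misclassifies it. Assumed names:
# SimpleSteganalyzer, fgsm_attack, epsilon -- none come from the paper.
import torch
import torch.nn as nn

class SimpleSteganalyzer(nn.Module):
    """Tiny placeholder CNN that outputs cover/stego logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def fgsm_attack(model, image, label, epsilon=0.01):
    """Add a sign-of-gradient perturbation that pushes the prediction
    away from the true label (0 = cover, 1 = stego)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = SimpleSteganalyzer().eval()
    stego = torch.rand(1, 1, 64, 64)   # stand-in for a stego image
    label = torch.tensor([1])          # ground-truth class: stego
    adv = fgsm_attack(model, stego, label)
    print("prediction before:", model(stego).argmax(1).item())
    print("prediction after: ", model(adv).argmax(1).item())
```

The key design point is that the perturbation budget epsilon is kept small so the adversarial image remains visually close to the original while flipping the classifier's decision, which is the property the attack success rates in the abstract measure.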
