Conference: International Conference on Information Security and Cryptology

Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network

Abstract

Deep neural networks (DNNs) perform effectively in machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples - slightly modified inputs that lead to incorrect classification - pose a severe threat to the security of DNNs. In some situations, however, adversarial examples can be useful, for example to deceive an enemy classifier on a battlefield; in that case, friendly classifiers should not be deceived. In this paper, we propose friend-safe adversarial examples, meaning that friendly machines can still classify the adversarial example correctly. To generate such examples, a transformation is applied that minimizes both the friend's misclassification and the adversary's correct classification. We present two configurations of the scheme: targeted and untargeted class attacks. In experiments on the MNIST dataset, the proposed method achieves a 100% attack success rate and 100% friendly accuracy with little distortion (2.18 and 1.53 for the two configurations, respectively). Finally, we propose a mixed battlefield application and a new covert channel scheme.
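The abstract describes the transformation only at a high level. As a rough illustration, below is a minimal untargeted sketch in PyTorch (an assumed framework, not the authors' code): a perturbation is optimized so that a hypothetical friend classifier keeps its correct prediction while a hypothetical enemy classifier is driven toward a wrong one, with an L2 penalty standing in for the distortion term. All names and constants (friend_safe_example, friend, enemy, c, margin) are illustrative assumptions.

```python
# Minimal sketch of a friend-safe, untargeted adversarial example (assumes PyTorch).
# Not the authors' implementation; the loss weights and margin are arbitrary choices.
import torch
import torch.nn.functional as F

def friend_safe_example(x, y, friend, enemy, steps=500, lr=0.01, c=0.1, margin=5.0):
    """x: image batch (N, C, H, W) in [0, 1]; y: true labels; friend/enemy: nn.Module classifiers."""
    delta = torch.zeros_like(x, requires_grad=True)        # perturbation to optimize
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(x + delta, 0.0, 1.0)             # keep pixels in a valid range
        friend_loss = F.cross_entropy(friend(adv), y)      # friend should still be correct
        enemy_ce = F.cross_entropy(enemy(adv), y)          # enemy should be wrong, so push this up
        enemy_loss = torch.clamp(margin - enemy_ce, min=0.0)  # stop pushing once confidently wrong
        distortion = delta.pow(2).sum()                    # L2 stand-in for the distortion measure
        loss = friend_loss + enemy_loss + c * distortion
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```

A targeted variant would replace the enemy term with cross-entropy toward a chosen target class; the paper's actual objective and distortion measure may differ from this sketch.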
