Computers & Security

Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier



Abstract

Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis in machine learning tasks. Recently proposed adversarial examples, slightly modified inputs that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that a friendly machine can still classify the adversarial example correctly. To produce such examples, a transformation is carried out that jointly minimizes the probability of incorrect classification by the friend and the probability of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We evaluated the scheme using the MNIST and CIFAR10 datasets. Our proposed method achieves a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further work. (C) 2018 Elsevier Ltd. All rights reserved.
