International Conference on Decision and Game Theory for Security

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

Abstract

Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image at test time, a constituent network is randomly selected based on a mixed policy. To obtain this policy, we formulate the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification of perturbed images on various datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than these defense mechanisms can afford alone. Lastly, to quantify the increase in robustness of an ensemble-based classification system when we use MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
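To make the test-time mechanism concrete, the following is a minimal Python sketch under stated assumptions: a zero-sum maximin approximation stands in for the paper's full Bayesian Stackelberg formulation, and the network names, attack names, and utility values are hypothetical placeholders rather than results from the paper. The defender solves for a mixed policy over the constituent DNNs and samples one network per query.

```python
# Sketch of MTD-style randomized ensemble inference (illustrative only).
# Assumptions not taken from the paper: the utility matrix, the network and
# attack names, and the zero-sum maximin solver used in place of the BSG.
import numpy as np
from scipy.optimize import linprog

# Hypothetical accuracies (%) of each constituent DNN under each attack;
# rows = networks, columns = attacks crafted against a specific network.
networks = ["cnn_a", "cnn_b", "mlp_c"]
attacks = ["attack_vs_a", "attack_vs_b", "attack_vs_c"]
U = np.array([
    [10.0, 80.0, 75.0],   # cnn_a: weak against the attack crafted for it
    [85.0, 12.0, 70.0],   # cnn_b
    [78.0, 74.0, 15.0],   # mlp_c
])

def maximin_policy(U):
    """Mixed strategy x maximizing the worst-case expected accuracy
    min_a sum_i x_i * U[i, a] (a zero-sum simplification of the BSG)."""
    n, m = U.shape
    # Variables: [x_1, ..., x_n, v]; maximize v  <=>  minimize -v.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # For every attack a: v - sum_i x_i * U[i, a] <= 0.
    A_ub = np.hstack([-U.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

policy, worst_case_acc = maximin_policy(U)

def pick_network(models, policy, rng=np.random.default_rng()):
    """Sample one constituent network per query according to the mixed policy;
    the selected model would then run inference on the input image."""
    idx = rng.choice(len(models), p=policy)
    return models[idx]

print(dict(zip(networks, np.round(policy, 3))), round(worst_case_acc, 1))
print("classify this query with:", pick_network(networks, policy))
```

With utilities like the ones above, where each attack transfers poorly to the other networks, the maximin solution spreads probability across all constituents instead of committing to one, so no single per-network attack can pull expected accuracy down to its worst case; this is the benefit that high differential immunity (low attack transferability across constituents) is meant to capture.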