International Conference on Decision and Game Theory for Security

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense


Abstract

Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks still remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image at test time, a constituent network is randomly selected based on a mixed policy. To obtain this policy, we formulate the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for various datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than can be afforded by these defense mechanisms alone. Lastly, to quantify the increase in robustness of an ensemble-based classification system when we use MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
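The test-time mechanism the abstract describes, sampling one constituent network per query according to a mixed policy, can be illustrated with the minimal sketch below. This is not the authors' released code; the `MTDeepClassifier` class, the `.predict` interface on the constituent networks, and the seeding scheme are hypothetical placeholders, assumed only for illustration.

```python
import numpy as np

class MTDeepClassifier:
    """Minimal sketch of MTD-style randomized classification.

    `networks` is a list of trained classifiers, each exposing `.predict(x)`;
    `mixed_policy` is a probability vector over those networks, e.g. the
    defender's equilibrium strategy from the Bayesian Stackelberg Game.
    """

    def __init__(self, networks, mixed_policy, seed=None):
        assert len(networks) == len(mixed_policy)
        self.networks = networks
        self.mixed_policy = np.asarray(mixed_policy, dtype=float)
        self.mixed_policy /= self.mixed_policy.sum()  # normalize defensively
        self.rng = np.random.default_rng(seed)

    def predict(self, x):
        # Draw one constituent DNN for this query; the attacker cannot know
        # in advance which network will classify this particular input.
        idx = self.rng.choice(len(self.networks), p=self.mixed_policy)
        return self.networks[idx].predict(x)
```

Because a fresh network is drawn for every query, an adversarial example crafted against one constituent architecture only succeeds with the probability that this architecture (or another one the attack transfers to) happens to be the one sampled.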
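To give a feel for how such a mixed policy might be derived, the sketch below solves a simplified zero-sum special case (a purely malicious user, worst-case attacker) as a maximin linear program. The paper's actual formulation is a Bayesian Stackelberg Game over both legitimate and malicious user types, which calls for a more involved solver (e.g., a mixed-integer program in the style of DOBSS); the utility matrix here is illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_mixed_policy(utility):
    """Zero-sum simplification: find a distribution over networks that
    maximizes the worst-case payoff against any single attack.

    `utility[i, j]` is the defender's payoff (e.g., accuracy) when network i
    classifies an input perturbed by attack j.  Rows: networks, cols: attacks.
    """
    n_nets, n_attacks = utility.shape
    # Variables: [p_1, ..., p_n, v], where v is the guaranteed payoff.
    # Maximize v  <=>  minimize -v.
    c = np.zeros(n_nets + 1)
    c[-1] = -1.0
    # For every attack j:  sum_i p_i * utility[i, j] >= v
    #   <=>  -U[:, j]^T p + v <= 0
    A_ub = np.hstack([-utility.T, np.ones((n_attacks, 1))])
    b_ub = np.zeros(n_attacks)
    A_eq = np.zeros((1, n_nets + 1))
    A_eq[0, :n_nets] = 1.0          # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_nets + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_nets], res.x[-1]

# Illustrative (made-up) utility matrix: 3 constituent DNNs vs. 2 attack types.
U = np.array([[0.10, 0.85],
              [0.80, 0.15],
              [0.60, 0.55]])
policy, worst_case = maximin_mixed_policy(U)
```

Intuitively, the more the constituent networks differ in which attacks fool them (the differential immunity the abstract refers to), the higher the worst-case payoff such a randomized policy can guarantee; if every attack transferred perfectly to every network, randomization would buy nothing.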
