IEEE Transactions on Knowledge and Data Engineering

Adversarial Deep Learning Models with Multiple Adversaries

Abstract

We develop an adversarial learning algorithm for supervised classification in general and Convolutional Neural Networks (CNN) in particular. The algorithm's objective is to produce small changes to the data distribution, defined over positive and negative class labels, so that the resulting data distribution is misclassified by the CNN. The theoretical goal is to determine a manipulating change on the input data that finds learner decision boundaries where many positive labels become negative labels. We then propose a CNN which is secure against such unforeseen changes in data. The algorithm generates adversarial manipulations by formulating a multiplayer stochastic game targeting the classification performance of the CNN. The multiplayer stochastic game is expressed in terms of multiple two-player sequential games. Each game consists of interactions between two players, an intelligent adversary and the learner CNN, such that a player's payoff function increases with the interactions. Following the convergence of a sequential noncooperative Stackelberg game, each two-player game is solved for its Nash equilibrium. The Nash equilibrium yields a pair of strategies (learner weights and evolutionary operations) from which neither the learner nor the adversary has an incentive to deviate. We then retrain the learner over all the adversarial manipulations generated by the multiple players to propose a secure CNN that is robust to subsequent adversarial data manipulations. The adversarial data and the corresponding CNN performance are evaluated on the MNIST handwritten digits data. The results suggest that game theory and evolutionary algorithms are very effective in securing deep learning models against performance vulnerabilities simulated as attack scenarios from multiple adversaries.
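
To make the game concrete, below is a minimal sketch of the alternating adversary-learner interaction the abstract describes, assuming a logistic-regression learner stands in for the CNN and Gaussian mutation with selection stands in for the evolutionary operations. Every function and parameter name here (train_learner, evolve_adversary, budget, and so on) is a hypothetical illustration, not the authors' implementation. The leader (learner) fits its weights, the follower (adversary) evolves a bounded perturbation of the positive class that maximizes the learner's misclassification, and the learner retrains over all accumulated manipulations, a crude best-response proxy for converging toward the Nash equilibrium.

    # Hypothetical sketch of the adversary-learner game; a logistic-regression
    # learner stands in for the CNN, Gaussian mutation + selection stands in
    # for the evolutionary operations. Not the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    def train_learner(X, y, epochs=300, lr=0.1):
        # Leader step: fit logistic-regression weights on the current data.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    def accuracy(w, X, y):
        return np.mean(((X @ w) > 0).astype(int) == y)

    def evolve_adversary(w, X, y, pop=30, gens=20, sigma=0.3, budget=0.5):
        # Follower step: evolve a bounded perturbation of the positive class
        # that maximizes the learner's misclassification rate (its payoff).
        pos = y == 1

        def payoff(delta):
            Xa = X.copy()
            Xa[pos] += np.clip(delta, -budget, budget)  # keep the change small
            return 1.0 - accuracy(w, Xa, y)

        population = [rng.normal(0.0, sigma, X[pos].shape) for _ in range(pop)]
        for _ in range(gens):
            parents = sorted(population, key=payoff, reverse=True)[:pop // 2]
            children = [p + rng.normal(0.0, sigma / 2, p.shape) for p in parents]
            population = parents + children             # selection + mutation
        best = max(population, key=payoff)
        Xa = X.copy()
        Xa[pos] += np.clip(best, -budget, budget)
        return Xa

    # Sequential (Stackelberg-style) play: the players alternate best
    # responses, and the learner retrains on every manipulation so far.
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    w = train_learner(X, y)
    manipulations = []
    for _ in range(5):                                  # one round per adversary
        Xa = evolve_adversary(w, X, y)                  # adversary's move
        manipulations.append(Xa)
        X_aug = np.vstack([X] + manipulations)          # all attacks so far
        y_aug = np.concatenate([y] * (len(manipulations) + 1))
        w = train_learner(X_aug, y_aug)                 # learner's move
    print("accuracy on the last attack:", accuracy(w, manipulations[-1], y))

In this sketch a strategy pair is treated as an approximate equilibrium when a further round of best responses no longer changes either the learner's weights or the attack payoff appreciably; the paper's full method replaces these stand-ins with a CNN learner and genuine evolutionary operators over the data.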
