Multimedia Tools and Applications

CNN adversarial attack mitigation using perturbed samples training

Abstract

Susceptibility to adversarial examples is one of the major concerns in convolutional neural network (CNN) applications. Training the model on adversarial examples, known as adversarial training, is a common countermeasure against such attacks. In practice, however, defenders do not know how the attacker generates adversarial examples. It is therefore pivotal to use more general alternatives that intrinsically improve the robustness of models. To this end, we train CNNs on perturbed samples that have been manipulated by various transformations and contaminated by different noises, fostering the robustness of the networks against adversarial attacks. This idea derives from the fact that both adversarial and noisy samples undermine classifier accuracy. We propose the combination of a convolutional denoising autoencoder with a classifier (CDAEC) as a defensive structure. The proposed method adds no computational cost. Experimental results on the MNIST database demonstrate that the accuracy of a CDAEC trained on perturbed samples exceeded 71.29% under adversarial attacks.
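As a rough illustration of the perturbed-sample training described above, the sketch below builds an MNIST input pipeline that manipulates images with geometric transformations and contaminates them with Gaussian noise. The specific transformations, noise type, and parameters (rotation angle, translation range, noise standard deviation) are illustrative assumptions; the abstract does not state the paper's exact perturbation set.

```python
import torch
from torchvision import datasets, transforms

class AddGaussianNoise:
    """Contaminate a tensor image with zero-mean Gaussian noise."""
    def __init__(self, std=0.1):  # std chosen for illustration only
        self.std = std

    def __call__(self, x):
        # Add noise and keep pixel values in the valid [0, 1] range.
        return (x + torch.randn_like(x) * self.std).clamp(0.0, 1.0)

# Transformations (rotation, translation) followed by noise contamination.
perturb = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.ToTensor(),
    AddGaussianNoise(std=0.1),
])

train_set = datasets.MNIST("data", train=True, download=True, transform=perturb)
```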
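The defensive structure combines a convolutional denoising autoencoder with a classifier. Below is a minimal PyTorch sketch of one plausible CDAEC arrangement for 28x28 MNIST inputs: the autoencoder maps a perturbed image to a clean estimate, and the classifier consumes that reconstruction. All layer shapes and sizes are assumptions, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class CDAEC(nn.Module):
    """Denoising autoencoder + classifier (layer sizes are illustrative)."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Encoder: 1x28x28 -> 32x7x7
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: 32x7x7 -> 1x28x28 (denoised reconstruction)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Classifier operates on the denoised image.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, num_classes),
        )

    def forward(self, x):
        denoised = self.decoder(self.encoder(x))
        return self.classifier(denoised), denoised

model = CDAEC()
logits, reconstruction = model(torch.randn(8, 1, 28, 28))  # batch of dummy images
```

In a setup like this the autoencoder would typically be trained with a reconstruction loss (e.g., MSE against the clean image) alongside cross-entropy on the logits; the abstract does not specify the training objective.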
