International Journal of Software Science and Computational Intelligence

Defending Deep Learning Models Against Adversarial Attacks



Abstract

Deep learning (DL) is used globally in almost every sector of technology and society. Despite its huge success, DL models and applications remain susceptible to adversarial attacks, which compromise the accuracy and integrity of these models. Many state-of-the-art models are vulnerable to well-crafted adversarial examples: perturbed versions of clean data with a small amount of added noise that is imperceptible to the human eye yet can quite easily fool the targeted model. This paper introduces six of the most effective gradient-based adversarial attacks on the ResNet image recognition model and demonstrates the limitations of the traditional adversarial-retraining technique. The authors then present a novel ensemble defense strategy built on adversarial retraining. The proposed method withstands all six adversarial attacks on the CIFAR-10 dataset, with accuracy greater than 89.31% and as high as 96.24%. The authors believe the design methodologies and experiments demonstrated are widely applicable to other domains of machine learning, DL, and computational intelligence security.
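The abstract names the attack family (gradient-based) but not the individual attacks. As a concrete illustration of how such an attack perturbs clean data, below is a minimal sketch of the canonical Fast Gradient Sign Method (FGSM), applied to a toy linear classifier standing in for the paper's ResNet; the model, weights, and epsilon value here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    # Gradient of binary cross-entropy w.r.t. the input for a linear
    # model p = sigmoid(w.x + b): dL/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, grad, epsilon=0.1):
    # One-step attack: nudge every input dimension by +/- epsilon along
    # the sign of the loss gradient, then clip back to the valid range.
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.uniform(0.2, 0.8, size=8)  # a "clean image" with pixels in [0, 1]
y = 1.0                            # true label

g = loss_grad_wrt_input(x, w, b, y)
x_adv = fgsm(x, g, epsilon=0.1)

# The perturbation is bounded in the L-infinity norm by epsilon, and the
# model's confidence in the true class drops.
print(np.max(np.abs(x_adv - x)))
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Because the perturbation of each pixel is at most epsilon, the adversarial image is visually indistinguishable from the clean one, which is exactly the property the abstract describes.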
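The abstract does not detail how the ensemble defense combines its members. One common realization (an assumption here, not necessarily the authors' exact design) is to adversarially retrain one model per attack type and combine their predictions by majority vote, sketched below with made-up member predictions.

```python
import numpy as np

def majority_vote(predictions):
    """Combine ensemble members by per-sample majority vote.

    predictions: array of shape (n_models, n_samples) holding integer
    class labels. Returns the label chosen by most members per sample.
    """
    preds = np.asarray(predictions)
    n_classes = int(preds.max()) + 1
    return np.array([
        np.bincount(preds[:, j], minlength=n_classes).argmax()
        for j in range(preds.shape[1])
    ])

# Three hypothetical members, each adversarially retrained against a
# different attack, classifying four CIFAR-10-style samples (labels 0-9).
member_preds = np.array([
    [3, 5, 1, 7],
    [3, 5, 2, 7],
    [3, 4, 2, 7],
])
print(majority_vote(member_preds))
```

The intuition is that an adversarial example crafted against one member rarely transfers to all of them, so the vote stays correct even when a single member is fooled.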


