IEEE Symposium Series on Computational Intelligence

Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks


Abstract

Deep neural networks have been found to be vulnerable to adversarial attacks. To address this challenge, this paper adopts an evolutionary multi-objective approach to the learning process and achieves a balance between learning accuracy and robustness against adversarial attacks. In addition, we propose to minimize the model complexity together with the adversarial training loss to defend against fast gradient sign method (FGSM) attacks. Our experimental results using two deep neural network models, LeNet-5 and VGG-11, on the MNIST and CIFAR-10 datasets, respectively, confirm that the proposed methods are effective in improving the robustness of deep learning models against adversarial attacks.
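As a rough sketch of the ideas summarized above, and not the authors' actual implementation, the Python/PyTorch code below shows how fast gradient sign method (FGSM) adversarial examples can be generated and how the two objectives mentioned in the abstract, the adversarial training loss and the model complexity, could be evaluated for a candidate model. The epsilon value, the parameter-count complexity measure, the [0, 1] input range, and the helper names are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_examples(model, x, y, epsilon):
        # FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y))
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Assumes inputs are scaled to [0, 1], as is common for MNIST/CIFAR-10.
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    def evaluate_objectives(model, loader, epsilon=0.1):
        # Two objectives to minimize for one candidate model:
        #   (1) average loss on FGSM-perturbed inputs,
        #   (2) model complexity, taken here simply as the parameter count.
        model.eval()
        adv_loss, n_batches = 0.0, 0
        for x, y in loader:
            x_adv = fgsm_examples(model, x, y, epsilon)
            with torch.no_grad():
                adv_loss += F.cross_entropy(model(x_adv), y).item()
            n_batches += 1
        complexity = sum(p.numel() for p in model.parameters())
        return adv_loss / max(n_batches, 1), float(complexity)

An evolutionary multi-objective optimizer (for example NSGA-II) would then treat these two values as the fitness vector while evolving the training hyperparameters; the exact hyperparameters and search algorithm used are described in the full paper.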
