IEEE Computer Society Annual Symposium on VLSI

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks



Abstract

Recent work has revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool DNNs. In this work, we revisit adversarial training, i.e., the DNN training process that includes adversarial examples in the training dataset to improve the DNN's resilience to adversarial attacks. Our experiments show that different adversarial strengths, i.e., perturbation levels of the adversarial examples, have different working ranges over which they resist attacks. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to facilitate the tradeoff between training time and hardware cost. Our results show that MAT can substantially reduce the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN. The tradeoffs among training time, robustness, and hardware cost are also discussed on an FPGA platform.
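The mixed-MAT idea described in the abstract, augmenting each training batch with adversarial examples crafted at several perturbation strengths, can be illustrated with a short sketch. The PyTorch-style code below is only an illustration under assumed choices (FGSM as the attack, pixel values in [0, 1], placeholder strength values); the helper names fgsm_examples and mixed_mat_step are hypothetical and are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_examples(model, x, y, eps):
        # Craft FGSM adversarial examples at perturbation strength eps (hypothetical helper).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def mixed_mat_step(model, optimizer, x, y, strengths=(0.05, 0.1, 0.2)):
        # One mixed-MAT-style training step: train on the clean batch plus adversarial
        # copies of it crafted at each strength in `strengths` (values are illustrative).
        model.eval()                      # keep BN/dropout statistics fixed while crafting perturbations
        adv = [fgsm_examples(model, x, y, eps) for eps in strengths]
        model.train()
        x_mix = torch.cat([x] + adv, dim=0)
        y_mix = y.repeat(1 + len(adv))    # labels are unchanged by the perturbations
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_mix), y_mix)
        loss.backward()
        optimizer.step()
        return loss.item()

In this sketch the clean batch and its perturbed copies share one forward/backward pass, which trades a larger effective batch for a single training stream; the parallel-MAT structure mentioned in the abstract would instead distribute the strengths across separate training paths.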
