Neurocomputing

Improving resistance to adversarial deformations by regularizing gradients

Abstract

Improving the resistance of deep neural networks to adversarial attacks is important for deploying models in realistic applications. To date, most defense methods are designed to resist intensity perturbations, while location perturbations have not yet attracted enough attention, even though the two types are equally important for deep model security. In this paper, we focus on adversarial deformations, a typical class of location perturbations, and propose a defense method named flow gradient regularization to improve the resistance of models to such attacks. Through theoretical analysis, we prove that regularizing flow gradients yields a tighter bound than regularizing input gradients. Through experiments over multiple datasets, network architectures, and adversarial deformations, our empirical results indicate that training with flow gradient regularization outperforms training with input gradient regularization by a large margin, and also outperforms adversarial training. Moreover, the proposed method can be combined with adversarial deformation training to further improve resistance. Our code is available at https://github.com/xpf/Flow-Gradient-Regularization. (c) 2021 Elsevier B.V. All rights reserved.
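
As an illustration of the idea described in the abstract, below is a minimal PyTorch sketch of flow gradient regularization: the training loss is penalized by the norm of its gradient with respect to a flow (displacement) field used to warp the input, rather than with respect to the input pixels. The function name, the identity-grid construction, and the penalty weight lambda_reg are assumptions made for this sketch and need not match the implementation in the linked repository.

# A minimal, illustrative sketch of flow-gradient regularization in PyTorch.
# The helper name, identity-grid construction, and penalty weight lambda_reg
# are assumptions made for this sketch, not the authors' implementation.
import torch
import torch.nn.functional as F

def flow_gradient_regularized_loss(model, x, y, lambda_reg=1.0):
    """Cross-entropy loss plus a penalty on the gradient w.r.t. a flow field."""
    n, _, h, w = x.shape
    # Zero displacement field (identity flow) in normalized [-1, 1] coordinates;
    # the regularizer is the gradient of the loss with respect to this field.
    flow = torch.zeros(n, h, w, 2, device=x.device, requires_grad=True)
    # Identity sampling grid produced from an identity affine transform.
    theta = torch.eye(2, 3, device=x.device).repeat(n, 1, 1)
    grid = F.affine_grid(theta, list(x.shape), align_corners=False)
    # Warp the input by the (currently zero) flow and classify the result.
    logits = model(F.grid_sample(x, grid + flow, align_corners=False))
    loss = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the flow field, not the input pixels;
    # create_graph=True keeps it differentiable for double backpropagation.
    (grad_flow,) = torch.autograd.grad(loss, flow, create_graph=True)
    penalty = grad_flow.flatten(1).norm(dim=1).mean()
    return loss + lambda_reg * penalty

In a training step, one would compute loss = flow_gradient_regularized_loss(model, x, y), then call loss.backward() and step the optimizer; the exact penalty form (for example, squared versus plain L2 norm) and the value of lambda_reg are choices that the paper and repository determine.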
