Conference on Neural Information Processing Systems

Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training

Abstract

We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches leverage a supervised scheme (either targeted or non-targeted) for generating attacks during training, which typically suffers from issues such as label leaking, as noted in recent works. In contrast, the proposed approach generates adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. More importantly, this new approach generates perturbed images in a collaborative fashion, taking inter-sample relationships into consideration. We analyze model robustness and demonstrate the effectiveness of the proposed approach through extensive experiments on different datasets, comparing against state-of-the-art approaches. Code is available at https://github.com/Haichao-Zhang/FeatureScatter.
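
To make the mechanism concrete, below is a minimal PyTorch sketch of the feature-scattering attack step. It is an illustration under stated assumptions, not the authors' reference implementation (see the linked repository for that): the `model.features` hook, the hyperparameters, and the entropy-regularized Sinkhorn solver used to couple the clean and perturbed batches are all illustrative choices.

```python
# Illustrative sketch only: feature scattering perturbs a batch by
# maximizing an optimal-transport (Sinkhorn) distance between the latent
# features of the clean and perturbed batches; no labels are involved.
import torch

def sinkhorn_ot(cost, reg=0.1, n_iters=20):
    # Entropy-regularized OT between two uniform empirical distributions.
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=cost.device)
    nu = torch.full((m,), 1.0 / m, device=cost.device)
    K = torch.exp(-cost / reg)           # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):             # Sinkhorn fixed-point iterations
        v = nu / (K.t() @ u + 1e-9)
        u = mu / (K @ v + 1e-9)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan
    return (plan * cost).sum()

def feature_scattering_attack(model, x, eps=8 / 255, step=2 / 255, n_steps=7):
    # `model.features` is an assumed hook returning latent feature vectors.
    with torch.no_grad():
        f_clean = model.features(x)
    # Random start inside the L-inf ball, as in PGD-style attacks.
    x_adv = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0, 1)
    for _ in range(n_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        f_adv = model.features(x_adv)
        cost = torch.cdist(f_clean, f_adv)          # pairwise feature distances
        cost = cost / (cost.max().detach() + 1e-9)  # scale for stability
        loss = sinkhorn_ot(cost)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step: *increase* the feature-scattering distance.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Training in this sketch then minimizes the usual cross-entropy on the scattered batch, e.g. `F.cross_entropy(model(feature_scattering_attack(model, x)), y)`, so labels enter only the outer training loss and never the attack itself, which is what avoids the label-leaking issue mentioned above.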
