Pacific-Rim Conference on Multimedia

AdvRefactor: A Resampling-Based Defense Against Adversarial Attacks



Abstract

Deep neural networks have achieved great success in many domains. However, they are vulnerable to adversarial attacks, which generate adversarial examples by adding tiny perturbations to legitimate images. Most previous defenses focus on modifying the DNN model itself to mitigate adversarial attacks. We propose a resampling-based defense, AdvRefactor, which instead transforms the inputs of the model and thereby removes adversarial perturbations. We explore two resampling algorithms, proximal interpolation and bilinear interpolation, which are computationally cheap, apply to a wide range of models, and can be combined with other defenses. Our evaluation results demonstrate that AdvRefactor can significantly mitigate adversarial attacks.
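
To make the idea concrete, below is a minimal sketch of an input-resampling preprocessing step in the spirit of AdvRefactor. The Pillow-based pipeline, the refactor_input name, and the 0.5 scale factor are illustrative assumptions rather than the paper's exact algorithm; the point is only that the image is resampled with nearest-neighbor or bilinear interpolation before it reaches the classifier, so pixel-level adversarial perturbations are disrupted while the model itself is left unmodified.

    # Hedged sketch: resample an input image (downscale, then restore) before
    # classification. Names and parameters are illustrative assumptions, not
    # the exact AdvRefactor procedure.
    from PIL import Image

    def refactor_input(image: Image.Image, scale: float = 0.5,
                       method: str = "bilinear") -> Image.Image:
        """Resample an image to weaken pixel-level adversarial perturbations."""
        resample = Image.NEAREST if method == "nearest" else Image.BILINEAR
        w, h = image.size
        # Downscale the image; small adversarial perturbations tend not to
        # survive the loss of high-frequency detail.
        small = image.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                             resample=resample)
        # Restore the original resolution so the downstream model's expected
        # input shape is unchanged; the model itself needs no modification.
        return small.resize((w, h), resample=resample)

    if __name__ == "__main__":
        img = Image.open("example.jpg").convert("RGB")  # hypothetical input file
        defended = refactor_input(img, scale=0.5, method="nearest")
        defended.save("example_refactored.jpg")

Because the transformation operates purely on the input, a preprocessing step like this can be placed in front of any trained classifier and stacked with other defenses, which matches the combinability claim in the abstract.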
