Journal of Electronic Imaging > RGN-Defense: erasing adversarial perturbations using deep residual generative network

RGN-Defense: erasing adversarial perturbations using deep residual generative network



Abstract

In recent years, deep neural networks have achieved great success in various fields, especially in computer vision. However, recent investigations have shown that current state-of-the-art classification models are highly vulnerable to adversarial perturbations contained in the input examples. We therefore propose a defense methodology against adversarial perturbations: before an input reaches the targeted network, adversarial perturbations are erased or mitigated by a deep residual generative network (RGN). By adopting an auxiliary VGG-19 network, the RGN is trained to optimize a joint loss comprising a low-level pixel loss, a middle-level texture loss, and a high-level task loss, so that the restored examples are highly consistent with the original legitimate examples. We call this RGN-based defense RGN-Defense. It is an independent defense module that can be flexibly integrated with other defense strategies, for example adversarial training, to construct a more powerful defense system. In experiments on ImageNet, comprehensive results demonstrate the robustness of RGN-Defense against current representative attacks. (C) 2019 SPIE and IS&T
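The joint objective described above, combining a low-level pixel loss, a middle-level texture loss computed from auxiliary-network features, and a high-level task loss, can be sketched as a weighted sum. In this minimal NumPy sketch, `feat_fn`, `logits_fn`, the Gram-matrix form of the texture term, and the weights `w_pix`/`w_tex`/`w_task` are illustrative assumptions standing in for the paper's VGG-19 feature extractor, target classifier, and exact formulation:

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (channels, positions) feature map; a common texture statistic."""
    return feat @ feat.T / feat.shape[1]

def joint_loss(restored, legitimate, feat_fn, logits_fn, label,
               w_pix=1.0, w_tex=0.5, w_task=0.1):
    """Weighted sum of pixel, texture, and task losses (weights are illustrative)."""
    # Low-level pixel loss: MSE between the restored and legitimate images.
    pixel = np.mean((restored - legitimate) ** 2)
    # Middle-level texture loss: MSE between Gram matrices of intermediate
    # features (feat_fn is a hypothetical stand-in for VGG-19 features).
    g_r = gram_matrix(feat_fn(restored))
    g_l = gram_matrix(feat_fn(legitimate))
    texture = np.mean((g_r - g_l) ** 2)
    # High-level task loss: cross-entropy of the downstream classifier
    # (logits_fn is a hypothetical stand-in) on the restored image.
    z = logits_fn(restored)
    p = np.exp(z - z.max())
    p /= p.sum()
    task = -np.log(max(p[label], 1e-12))
    return w_pix * pixel + w_tex * texture + w_task * task
```

Minimizing this loss pushes the RGN's output toward the legitimate example at three levels at once: pixels, texture statistics, and the target network's prediction.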


