
Defense against adversarial attacks by low-level image transformations



Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, which can fool classifiers by maliciously adding imperceptible perturbations to the original input. Much of the existing research on defending against adversarial examples pays little attention to real-world applications, suffering from either high computational complexity or poor defensive effect. Motivated by this observation, we develop an efficient preprocessing module to defend against adversarial attacks. Specifically, before an adversarial example is fed into the model, we perform two low-level image transformations on the picture: WebP compression and a flip operation. This yields a de-perturbed sample that can be correctly classified by DNNs. WebP compression is used to remove small adversarial noise. Because it introduces loop filtering, it does not produce the blocking artifacts of JPEG compression, so the visual quality of the denoised image is higher. The flip operation, which flips the image once along one of its sides, destroys the specific structure of the adversarial perturbations. By using class activation mapping to localize the discriminative image regions, we show that flipping the image can mitigate adversarial effects. Extensive experiments demonstrate that the proposed scheme outperforms state-of-the-art defense methods. It can effectively defend against adversarial attacks while incurring only slight accuracy drops on normal images.
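Since the defense is a model-agnostic preprocessing step, the two transformations can be sketched on their own. The snippet below is a minimal illustration using Pillow; the deperturb name, the WebP quality setting, and the choice of a horizontal flip are assumptions made for illustration, not values taken from the paper.

from io import BytesIO
from PIL import Image

def deperturb(image, quality=60):
    # WebP re-compression removes small, high-frequency adversarial noise;
    # its in-loop filtering avoids the blocking artifacts seen with JPEG.
    buf = BytesIO()
    image.save(buf, format="WEBP", quality=quality)
    buf.seek(0)
    compressed = Image.open(buf).convert("RGB")

    # A single flip along one side of the image breaks the spatial
    # structure that the adversarial perturbation was optimized for.
    return compressed.transpose(Image.FLIP_LEFT_RIGHT)

# Usage: feed the de-perturbed sample to the unmodified classifier, e.g.
#   x = Image.open("adversarial.png").convert("RGB")
#   prediction = model(to_tensor(deperturb(x)))   # model/to_tensor are placeholders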

Bibliographic information

  • Source
    《International Journal of Intelligent Systems》 | 2020, Issue 10 | pp. 1453-1466 | 14 pages
  • Author affiliation

    Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    adversarial examples; deep neural networks; flip operation; image transformations; WebP compression;

