JMLR: Workshop and Conference Proceedings

Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks

Abstract

It is now well known that deep neural networks are vulnerable to adversarial examples, constructed by applying small but malicious perturbations to the original inputs. Moreover, the perturbed inputs can transfer between different models: adversarial examples generated on a specific model will often fool other unseen models at a significant success rate. This allows an adversary to attack deployed systems without issuing any queries, which could raise severe security issues, particularly in safety-critical scenarios. In this work, we empirically investigate two classes of factors that might influence the transferability of adversarial examples. The first comprises model-specific factors, including network architecture, model capacity, and test accuracy. The second is the local smoothness of the loss surface used for generating adversarial examples. More importantly, building on these findings, we propose a simple but effective strategy for improving transferability, whose effectiveness is confirmed through extensive experiments on both the CIFAR-10 and ImageNet datasets.
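To make the transfer setting concrete, the sketch below (not from the paper; the tiny architectures, the random batch, and the 8/255 budget are illustrative assumptions) crafts one-step FGSM adversarial examples on a white-box source model and measures how often they also fool an unseen target model, i.e., the transfer success rate described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_cnn():
    # Tiny CNN standing in for an independently trained CIFAR-10 classifier.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),
    )

source_model = make_cnn().eval()  # white-box model used to craft the attack
target_model = make_cnn().eval()  # unseen model the attack is transferred to

def fgsm(model, x, y, eps):
    # One-step FGSM: move each input by eps along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(8, 3, 32, 32)    # placeholder batch (CIFAR-10-sized inputs)
y = torch.randint(0, 10, (8,))  # placeholder labels

x_adv = fgsm(source_model, x, y, eps=8 / 255)

# Transfer rate: fraction of adversarial examples crafted on the source
# model that the target model also misclassifies.
with torch.no_grad():
    transfer_rate = (target_model(x_adv).argmax(dim=1) != y).float().mean().item()
print(f"transfer success rate: {transfer_rate:.2f}")
```

In practice both models would be trained classifiers on the same task; with untrained stand-ins the printed rate is meaningless and serves only to show how the measurement is set up.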
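The abstract does not spell out the proposed strategy. Continuing the sketch above, one variant that exploits the local-smoothness finding (an assumption on our part, not necessarily the authors' method) averages the loss gradient over several Gaussian-perturbed copies of the input before taking its sign, so the attack follows a locally smoothed loss surface rather than a single noisy gradient:

```python
def smoothed_fgsm(model, x, y, eps, sigma=0.05, m=20):
    # Hypothetical smoothed-gradient FGSM: estimate a locally averaged gradient
    # from m Gaussian-perturbed copies of x, then take one signed step of size eps.
    grad = torch.zeros_like(x)
    for _ in range(m):
        x_noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        F.cross_entropy(model(x_noisy), y).backward()
        grad += x_noisy.grad
    return (x + eps * (grad / m).sign()).clamp(0, 1).detach()

x_adv_smooth = smoothed_fgsm(source_model, x, y, eps=8 / 255)
```

The values of sigma and m are illustrative; the intuition is that averaging over nearby points suppresses sharp local fluctuations of the loss surface, which the abstract identifies as relevant to transferability.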
