International Conference on Science of Cyber Security

LET-Attack: Latent Encodings of Normal-Data Manifold Transferring to Adversarial Examples


Abstract

Recent studies have highlighted the vulnerability and low robustness of deep learning models against adversarial examples. This weakness limits their deployment in ubiquitous applications that demand a high level of security, such as driverless systems, unmanned aerial vehicles, and intrusion detection. In this paper, we propose the latent encodings transferring attack (LET-attack), which generates targeted natural adversarial examples to fool well-trained classifiers. To perturb in latent space, we train WGAN variants on various datasets that achieve feature extraction, image reconstruction, and discrimination of counterfeit images with good performance. Thanks to our two-stage mapping-transformation procedure, the adversary applies precise, semantic perturbations to source data with reference to target data in latent space. By using the critic of the WGAN variant together with the well-trained classifier, the adversary crafts more verisimilar and effective adversarial examples. Experimental results on MNIST, FashionMNIST, CIFAR-10, and LSUN show that LET-attack yields a distinct set of adversarial examples through partly targeted transfer on the data manifold and attains comparable attack performance against state-of-the-art models in different attack scenarios. Furthermore, we evaluate the transferability of LET-attack across different classifiers on MNIST and CIFAR-10, and find that the adversarial examples transfer easily with high confidence.
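The abstract describes perturbing a source image toward a target image in the latent space of a WGAN-style model, then checking the result with the critic and the victim classifier. The sketch below is a minimal illustration of that idea only: the encoder E, generator G, critic D, classifier f, and the linear latent interpolation are assumptions for illustration and are not the authors' exact two-stage mapping transformation.

```python
# Illustrative latent-space targeted perturbation in the spirit of LET-attack.
# E, G, D, f and the interpolation search are assumed components, not the
# paper's exact procedure.
import torch

def let_attack_sketch(E, G, D, f, x_src, x_tgt, y_tgt, steps=50):
    """Walk the latent path from a source image toward a target image until the
    decoded image is classified as y_tgt, using the critic D as a rough
    on-manifold (realism) check."""
    z_src, z_tgt = E(x_src), E(x_tgt)            # encode both images into latent space
    for i in range(1, steps + 1):
        alpha = i / steps                        # interpolation coefficient in [0, 1]
        z_adv = (1 - alpha) * z_src + alpha * z_tgt
        x_adv = G(z_adv)                         # decode the perturbed latent code
        pred = f(x_adv).argmax(dim=1)            # victim classifier's prediction
        realism = D(x_adv).mean()                # higher critic score = closer to data manifold
        if (pred == y_tgt).all():                # stop at the smallest step that fools f
            return x_adv.detach(), alpha, realism.item()
    return None
```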
