European Conference on Computer Vision

Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms

Abstract

Existing black-box attacks on deep neural networks (DNNs) have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. We carry out a thorough comparative evaluation of black-box attacks and show that Gradient Estimation attacks achieve attack success rates similar to state-of-the-art white-box attacks on the MNIST and CIFAR-10 datasets. We also apply the Gradient Estimation attacks successfully against real-world classifiers hosted by Clarifai. Further, we evaluate black-box attacks against state-of-the-art defenses based on adversarial training and show that the Gradient Estimation attacks are very effective even against these defenses.
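The abstract describes estimating gradients purely from query access to the target model's class probabilities. The sketch below illustrates one plausible reading of that idea: a two-sided finite-difference gradient estimate of the true-class log-probability, followed by a single FGSM-style perturbation step. The function names (query_probs, estimate_gradient, untargeted_step), the log-probability loss, and the hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def estimate_gradient(query_probs, x, target_class, delta=1e-2):
    # Two-sided finite-difference estimate of d log p(target_class | x) / dx,
    # using only query access to the model's class probabilities.
    # NOTE: this naive per-coordinate loop costs 2 * x.size queries per example;
    # the query-reduction strategies mentioned in the abstract exist to avoid
    # exactly this dependence on input dimensionality.
    flat = x.reshape(-1).astype(np.float64)
    grad = np.zeros_like(flat)
    for i in range(flat.size):
        e = np.zeros_like(flat)
        e[i] = delta
        p_plus = query_probs((flat + e).reshape(x.shape))[target_class]
        p_minus = query_probs((flat - e).reshape(x.shape))[target_class]
        grad[i] = (np.log(p_plus) - np.log(p_minus)) / (2.0 * delta)
    return grad.reshape(x.shape)

def untargeted_step(query_probs, x, true_class, eps=0.3):
    # Single-step untargeted attack: move the input in the direction that
    # decreases the estimated log-probability of the true class, then clip
    # back to the valid pixel range [0, 1].
    grad = estimate_gradient(query_probs, x, true_class)
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)
```

An iterative variant, as the abstract mentions, would repeat a smaller step of this form while projecting back onto a norm ball around the original input.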