Venue: JMLR: Workshop and Conference Proceedings

Black-box Adversarial Attacks with Limited Queries and Information


Abstract

Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.
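The abstract does not spell out the attack mechanics. In the query-limited setting it describes, one common score-based approach is to estimate gradients purely from model queries using natural evolution strategies (NES) with antithetic sampling, then run gradient ascent on the target-class score. A minimal sketch under that assumption — the `score` function below is a hypothetical stand-in for a classifier's target-class probability, not the paper's actual model:

```python
import numpy as np

def nes_gradient(score, x, sigma=0.1, n_samples=50, rng=None):
    """Estimate the gradient of a black-box score function at x using
    NES-style antithetic sampling: only forward queries are needed,
    costing 2 * n_samples queries per estimate."""
    rng = rng if rng is not None else np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (score(x + sigma * u) - score(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)

# Hypothetical stand-in for a classifier's target-class score: it peaks
# at `target`, so ascending it drives x toward that point.
target = np.array([1.0, -2.0, 0.5])
def score(x):
    return -np.sum((x - target) ** 2)

x = np.zeros(3)
for _ in range(200):
    x = x + 0.05 * nes_gradient(score, x)  # gradient ascent on the estimate
```

In a real attack, each step would additionally be projected back into an L-infinity ball around the original image so the perturbation stays imperceptible; the sketch omits that for brevity.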
