IEEE European Symposium on Security and Privacy

PRADA: Protecting Against DNN Model Stealing Attacks

Abstract

Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp) and prediction accuracy (up to +46 pp) on two datasets. We provide takeaways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
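The attack the abstract refers to reduces to a query-and-retrain loop against the victim's prediction API. The sketch below illustrates that generic loop only; `query_prediction_api` and `seed_inputs` are hypothetical placeholders, and the noise-based query synthesis stands in for, rather than reproduces, the paper's synthetic-query strategies and hyperparameter optimization.

```python
# Generic model extraction loop (illustrative, not the paper's attack):
# label inputs via the victim API, grow the query set with synthetic
# (here: noise-perturbed) inputs, and retrain a substitute model.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_substitute(query_prediction_api, seed_inputs, rounds=3, noise=0.1):
    inputs = np.asarray(seed_inputs, dtype=np.float64)
    labels = np.array([query_prediction_api(x) for x in inputs])
    substitute = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
    for _ in range(rounds):
        # Synthetic queries: perturb known inputs and let the victim label them.
        synthetic = inputs + noise * np.random.randn(*inputs.shape)
        new_labels = np.array([query_prediction_api(x) for x in synthetic])
        inputs = np.vstack([inputs, synthetic])
        labels = np.concatenate([labels, new_labels])
        substitute.fit(inputs, labels)  # retrain on the grown query set
    return substitute
```

On the defense side, the abstract states that PRADA monitors the distribution of consecutive API queries and raises an alarm when it deviates from benign behavior. The following is a minimal sketch of that idea under stated assumptions (L2 nearest-neighbour distances between flattened queries and a Shapiro-Wilk normality test with an illustrative threshold); the paper's exact statistic, per-class bookkeeping, and thresholds may differ.

```python
# PRADA-style detection sketch: benign query streams tend to produce
# near-normally distributed nearest-neighbour distances, while synthetic
# attack queries do not. Threshold and warm-up size are illustrative.
import numpy as np
from scipy.stats import shapiro

class QueryDistributionDetector:
    def __init__(self, normality_threshold=0.9, min_queries=20):
        self.normality_threshold = normality_threshold  # alarm if W falls below this
        self.min_queries = min_queries                  # wait for enough samples
        self.past_queries = []
        self.min_distances = []

    def observe(self, query):
        """Record one API query; return True if model extraction is suspected."""
        x = np.asarray(query, dtype=np.float64).ravel()
        if self.past_queries:
            # Distance from the new query to its nearest previously seen query.
            dists = np.linalg.norm(np.stack(self.past_queries) - x, axis=1)
            self.min_distances.append(float(dists.min()))
        self.past_queries.append(x)
        if len(self.min_distances) < self.min_queries:
            return False  # not enough evidence yet
        w_statistic, _ = shapiro(np.array(self.min_distances))
        return w_statistic < self.normality_threshold
```

A detector of this kind would sit in front of the prediction API and be keyed per client, triggering throttling or blocking when it fires; the no-false-positive claim in the abstract depends on calibrating the threshold against benign traffic.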