Annual Conference on Privacy, Security and Trust

Model Inversion Attacks for Prediction Systems: Without Knowledge of Non-Sensitive Attributes

Abstract

While online services based on machine learning (ML) have been attracting considerable attention in both academia and industry, privacy issues are becoming a threat that cannot be ignored. Recently, Fredrikson et al. [USENIX 2014] proposed a new paradigm of model inversion attacks, in which an adversary exposes sensitive information about users by using an ML system for an unintended purpose. Specifically, the attack reveals the sensitive attribute values of a target user from the user's non-sensitive attributes and the output of the ML model. For the attack to succeed, the adversary must therefore possess the non-sensitive attribute values of the target user before mounting the attack. In reality, however, even though this information (i.e., the non-sensitive attributes) is not necessarily information the user regards as sensitive, it may still be difficult for the adversary to actually acquire. In this paper, we propose a general model inversion (GMI) framework that captures the scenario in which knowledge of the non-sensitive attributes is not necessarily available; the framework also covers the original scenario of Fredrikson et al. Notably, we generalize the paradigm of Fredrikson et al. by additionally modeling the amount of auxiliary information the adversary possesses at the time of the attack. The proposed GMI framework enables a new type of model inversion attack on prediction systems that can be carried out without knowledge of the non-sensitive attributes. At a high level, we use data poisoning in a novel way: we inject malicious data into the training set to turn the ML model into a target model that can be attacked without knowledge of the non-sensitive attributes. The new attack infers the sensitive attributes in the user's input from the output of the ML model alone, even when the user's non-sensitive attributes are unavailable to the adversary. Finally, we give a concrete algorithm for our model inversion attack on prediction systems based on linear regression models, together with a detailed description of how the data poisoning algorithm is constructed, and we evaluate the performance of the new attack through experiments on real data sets.
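
To make the two settings concrete, here is a minimal toy sketch in Python contrasting them on a linear regression model y = w · x + b. It is an illustration, not the paper's algorithm: the synthetic data and, in particular, the poisoned coefficient pattern in part (2), which assumes poisoning has driven the non-sensitive coefficients to zero, are assumptions made purely for exposition.

    # Toy sketch of model inversion on linear regression (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    d, S = 5, 0                      # d attributes; index S is sensitive
    w, b = rng.normal(size=d), 0.3   # toy model: y = w . x + b

    x = rng.normal(size=d)           # target user's full attribute vector
    y = w @ x + b                    # model output observed by the adversary

    # (1) Fredrikson et al.-style inversion: the adversary knows y and all
    # non-sensitive attributes, and solves the linear equation for x[S].
    x_s = (y - b - sum(w[i] * x[i] for i in range(d) if i != S)) / w[S]
    print("with non-sensitive attributes:", x_s, "true:", x[S])

    # (2) GMI-style scenario (assumed poisoning effect): if the poisoned
    # model's non-sensitive coefficients are ~0, its output depends only on
    # the sensitive attribute, so y alone suffices for inversion.
    w_p = np.zeros(d); w_p[S] = w[S]
    y_p = w_p @ x + b
    print("from the output alone:", (y_p - b) / w_p[S], "true:", x[S])

Part (1) works only because the adversary already holds every non-sensitive attribute value; part (2) shows why a model whose output is dominated by the sensitive attribute can be inverted from the output alone, which is the capability the GMI attack aims to engineer through data poisoning.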
