IEEE Symposium on Security and Privacy

Membership Inference Attacks Against Machine Learning Models


Abstract

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
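The attack pipeline the abstract describes (shadow models trained on data from the same distribution, whose prediction vectors on known members and non-members supervise an attack classifier) can be sketched in a few lines. The sketch below uses synthetic data and scikit-learn; the model choices (random forests as target and shadow models, logistic regression as the attack model), the dataset sizes, and the use of a single attack model rather than the paper's per-class attack models are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for the private data distribution; 20 features, 2 classes.
X, y = make_classification(n_samples=8000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)

# Target model: the attacker sees only its black-box prediction vectors.
X_in, X_rest, y_in, y_rest = train_test_split(X, y, train_size=1000, random_state=1)
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

# Shadow models: trained on disjoint samples from the same distribution,
# so the attacker knows membership ground truth for every shadow record.
attack_X, attack_y = [], []
pool_X, pool_y = X_rest, y_rest
for i in range(5):
    Xs_in, pool_X, ys_in, pool_y = train_test_split(pool_X, pool_y, train_size=500, random_state=i)
    Xs_out, pool_X, _, pool_y = train_test_split(pool_X, pool_y, train_size=500, random_state=100 + i)
    shadow = RandomForestClassifier(random_state=i).fit(Xs_in, ys_in)
    # Attack features: the shadow model's confidence vector on each record,
    # labeled 1 for training members and 0 for non-members.
    attack_X += [shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)]
    attack_y += [np.ones(len(Xs_in)), np.zeros(len(Xs_out))]

# Attack model: learns to separate "member" from "non-member" confidence vectors.
attack = LogisticRegression().fit(np.vstack(attack_X), np.concatenate(attack_y))

# Membership inference against the target on records with known ground truth.
print("inferred member rate on true members:     %.2f"
      % attack.predict(target.predict_proba(X_in)).mean())
print("inferred member rate on true non-members: %.2f"
      % attack.predict(target.predict_proba(pool_X)).mean())
```

A gap between the two printed rates indicates membership leakage: the overfit target model is systematically more confident on records it trained on, and the attack model, having seen the same pattern in the shadow models, exploits it through black-box queries alone.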
