IEEE Symposium on Security and Privacy

Helen: Maliciously Secure Coopetitive Learning for Linear Models



Abstract

Many organizations wish to collaboratively train machine learning models on their combined datasets for a common benefit (e.g., better medical research, or fraud detection). However, they often cannot share their plaintext datasets due to privacy concerns and/or business competition. In this paper, we design and build Helen, a system that allows multiple parties to train a linear model without revealing their data, a setting we call coopetitive learning. Compared to prior secure training systems, Helen protects against a much stronger adversary who is malicious and can compromise m-1 out of m parties. Our evaluation shows that Helen can achieve up to five orders of magnitude of performance improvement when compared to training using an existing state-of-the-art secure multi-party computation framework.
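To make the coopetitive setting concrete, the sketch below illustrates the basic idea of additive secret sharing, one common building block of secure multi-party computation: each party splits its private value into random shares so that the parties can jointly compute an aggregate (e.g., a summed gradient for a linear model) without any party seeing another's input. This is a minimal toy illustration of the general technique, not Helen's actual protocol, and all names in it are invented for this example.

```python
import secrets

P = 2**61 - 1  # modulus of the toy field (a Mersenne prime)

def share(value, n_parties):
    """Split an integer into n_parties additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the shared value by summing the shares modulo P."""
    return sum(shares) % P

# Each party holds one private value (think: a local gradient component).
private_values = [12, 7, 30]
all_shares = [share(v, 3) for v in private_values]

# Party j locally adds up the j-th share of every value; each partial
# sum is still uniformly random and reveals nothing on its own.
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# The partial sums reconstruct only the aggregate, never the inputs.
total = reconstruct(partial_sums)
print(total)  # 49
```

Plain additive sharing like this is only secure against honest-but-curious parties; tolerating a malicious adversary that corrupts m-1 of m parties, as Helen does, additionally requires integrity mechanisms on top of the shares.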
