Information Sciences: An International Journal

Privacy-Preserving distributed deep learning based on secret sharing

Abstract

Distributed deep learning (DDL) naturally provides a privacy-preserving solution that enables multiple parties to jointly learn a deep model without explicitly sharing their local datasets. However, existing privacy-preserving DDL schemes still suffer from severe information leakage and/or incur a significant increase in communication cost. In this work, we design a privacy-preserving DDL framework in which all participants can keep their local datasets private at low communication and computational cost, while still maintaining the accuracy and efficiency of the learned model. By adopting an effective secret sharing strategy, we allow each participant to split the intermediate parameters produced during training into shares and upload only an aggregation result to the cloud server. We theoretically show that the local dataset of a particular participant is well protected against the honest-but-curious cloud server as well as the other participants, even in the challenging case where the cloud server colludes with some participants. Extensive experimental results validate the superiority of the proposed secret sharing based distributed deep learning (SSDDL) framework. (C) 2020 Elsevier Inc. All rights reserved.
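The aggregation step described in the abstract can be illustrated with plain additive secret sharing. The sketch below is not the paper's actual SSDDL protocol, only a minimal toy version under assumed conventions: each of n participants splits its local gradient into n additive shares, hands one share to every peer, and uploads only the sum of the shares it received, so the server recovers the total gradient without ever seeing an individual contribution. All function names are hypothetical, and plain floats stand in for the fixed-point or finite-field encoding a real scheme would use.

```python
import numpy as np

def make_shares(grad, n_parties):
    # Split a gradient vector into n_parties additive shares that sum to grad.
    # Plain floats are used for clarity; a real scheme would mask values in a
    # finite field / fixed-point encoding so each share looks uniformly random.
    shares = [np.random.randn(*grad.shape) for _ in range(n_parties - 1)]
    shares.append(grad - sum(shares))  # final share makes the sum exact
    return shares

def aggregation_round(local_grads):
    # One aggregation round among n participants (hypothetical helper).
    n = len(local_grads)
    # Step 1: participant i splits its gradient and sends share j to peer j.
    all_shares = [make_shares(g, n) for g in local_grads]
    # Step 2: participant j sums the shares it received and uploads only that
    # aggregate; an honest-but-curious server never sees a raw gradient.
    uploads = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
    # Step 3: the server adds the uploads, recovering exactly sum(local_grads).
    return sum(uploads)

if __name__ == "__main__":
    grads = [np.random.randn(4) for _ in range(3)]
    agg = aggregation_round(grads)
    assert np.allclose(agg, sum(grads))
    print("aggregated gradient:", agg)
```

Note that this toy version tolerates only limited collusion: if the server teams up with n-1 participants, the remaining party's gradient can be reconstructed, which is why the paper's specific share distribution and collusion analysis matter.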
