IEEE Conference on Computer Communications Workshops

Privacy-Preserving Decentralized Aggregation for Federated Learning

Abstract

In this paper, we develop a privacy-preserving decentralized aggregation protocol for federated learning. We formulate the distributed aggregation protocol with the Alternating Direction Method of Multipliers (ADMM) algorithm and examine its privacy challenges. Unlike prior works that use differential privacy or homomorphic encryption for privacy, we develop a protocol that controls communication among participants in each round of aggregation to minimize privacy leakage. We establish the protocol's privacy guarantee against an honest-but-curious adversary. We also propose an efficient algorithm to construct such a communication pattern, which is inspired by combinatorial block design theory. Our secure aggregation protocol based on the novel group-based communication pattern leads to an efficient algorithm for federated training with privacy guarantees. We evaluate our federated training algorithm on computer vision and natural language processing models over benchmark datasets with 9 and 15 distributed sites. Experimental results demonstrate the privacy-preserving capabilities of our algorithm while maintaining learning performance comparable to the baseline centralized federated learning.
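
The abstract names two building blocks: posing the aggregation step as a consensus ADMM problem, and restricting who talks to whom in each round via groups drawn from a combinatorial block design. The sketch below only illustrates these two ideas and is not the authors' protocol: the all-subsets "design" in toy_groups, the quadratic local objective, the hyperparameters, and the centrally written consensus step are assumptions made for brevity (in the paper's protocol the exchange would be confined to the groups). It checks that the ADMM fixed point recovers the same average a centralized federated-learning server would compute.

```python
# Illustrative sketch only; not the implementation described in the paper.
from itertools import combinations

import numpy as np


def toy_groups(num_sites: int, group_size: int):
    """Toy stand-in for a combinatorial block design: every pair of sites
    shares at least one group (here, simply all subsets of the given size).
    A real block design achieves the same coverage with far fewer,
    carefully balanced blocks."""
    return [list(g) for g in combinations(range(num_sites), group_size)]


def consensus_admm_average(local_updates, rho: float = 1.0, iters: int = 50):
    """Aggregate local model updates a_i by solving
        min_x sum_i 0.5 * ||x_i - a_i||^2   s.t.  x_i = z for all i
    with standard consensus ADMM.  The fixed point is z = mean_i(a_i),
    i.e. the same aggregate a centralized FedAvg server would compute."""
    a = np.asarray(local_updates, dtype=float)   # (N, d) local updates
    n, d = a.shape
    z = np.zeros(d)                              # shared consensus variable
    y = np.zeros((n, d))                         # dual variables (unscaled form)
    for _ in range(iters):
        x = (a - y + rho * z) / (1.0 + rho)      # local x_i-updates (closed form)
        z = (x + y / rho).mean(axis=0)           # consensus z-update
        y = y + rho * (x - z)                    # dual ascent step
    return z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = rng.normal(size=(9, 4))            # 9 sites, 4-dimensional updates
    agg = consensus_admm_average(updates)
    assert np.allclose(agg, updates.mean(axis=0), atol=1e-6)
    print("example groups for 9 sites:", toy_groups(9, 3)[:3], "...")
    print("ADMM aggregate equals the centralized average:", agg)
```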
