IEEE Transactions on Parallel and Distributed Systems

FedSCR: Structure-Based Communication Reduction for Federated Learning

Abstract

Federated learning allows edge devices to collaboratively train a shared model on their local data without leaking user privacy. Two challenges must be tackled in federated learning: the non-independent-and-identically-distributed (Non-IID) nature of the data, which leads to severe accuracy degradation, and the enormous communication overhead of aggregating parameters. In this article, we conduct a detailed analysis of parameter updates on Non-IID datasets and compare them with the IID setting. Experimental results show that the parameter update matrices are structure-sparse and that more gradients can be identified as negligible updates on Non-IID data. Based on these observations, we propose a structure-based communication reduction algorithm, called FedSCR, that reduces the number of parameters transported through the network while maintaining model accuracy. FedSCR aggregates the parameter updates over channels and filters, then identifies and removes redundant updates by comparing the aggregated values against a threshold. Unlike traditional structured pruning methods, FedSCR retains the complete model, which therefore does not need to be retrained or fine-tuned. Because of the unbalanced data distribution, the local loss and weight divergence vary considerably across devices. We therefore further propose an adaptive FedSCR that dynamically adjusts the bounded threshold to enhance model robustness on Non-IID data. Evaluation results show that our proposed strategies achieve almost 50 percent upstream communication reduction without loss of accuracy. FedSCR can also be integrated into state-of-the-art federated learning algorithms to dramatically reduce the number of parameters pushed to the global server, with a tolerable accuracy reduction.
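To make the mechanism concrete, the following is a minimal NumPy sketch of the structure-based update filtering described in the abstract: per-filter and per-channel update magnitudes are aggregated, and groups falling below a threshold are dropped before upload. All names (sparsify_update, adaptive_threshold) and the adaptive-threshold rule are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def sparsify_update(update, threshold):
    """Drop structurally redundant updates (illustrative sketch).

    `update` is a conv-layer weight delta of shape
    (out_channels, in_channels, k, k). Groups whose aggregated
    absolute update falls below `threshold` are zeroed, so only
    the surviving entries need to be uploaded.
    """
    filtered = update.copy()
    # Aggregate the absolute updates over each output filter.
    filter_scores = np.abs(filtered).sum(axis=(1, 2, 3))
    filtered[filter_scores < threshold] = 0.0
    # Aggregate the absolute updates over each input channel.
    channel_scores = np.abs(filtered).sum(axis=(0, 2, 3))
    filtered[:, channel_scores < threshold] = 0.0
    return filtered

def adaptive_threshold(base, local_loss, global_loss, bound):
    """One plausible adaptive rule (an assumption, not the paper's
    formula): scale the threshold with the client's local loss and
    keep it within a fixed bound."""
    return min(base * local_loss / global_loss, bound)

# Hypothetical usage on a random update matrix:
rng = np.random.default_rng(0)
delta = rng.normal(scale=1e-3, size=(16, 8, 3, 3))
sparse_delta = sparsify_update(delta, threshold=0.05)
print("nonzero fraction:", np.count_nonzero(sparse_delta) / delta.size)

Unlike unstructured top-k sparsification, filtering whole filters and channels preserves the model's structure, which is why, as the abstract notes, the retained model needs no retraining or fine-tuning.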