IEEE Conference on Computer Communications Workshops

Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning


Abstract

To exploit the wealth of data generated and located at distributed entities such as mobile phones, a revolutionary decentralized machine learning setting, known as federated learning, enables multiple clients to collaboratively learn a machine learning model while keeping all their data on-device. However, the scale and decentralization of federated learning present new challenges. Communication between the clients and the server is considered a main bottleneck in the convergence time of federated learning, because a very large number of model weights needs to be exchanged in each training round. In this paper, we propose and study Adaptive Federated Dropout (AFD), a novel technique to reduce the communication costs associated with federated learning. It optimizes both server-client communication and computation costs by allowing clients to train locally on a selected subset of the global model. We empirically show that this strategy, combined with existing compression methods, provides up to a 57× reduction in convergence time. It also outperforms state-of-the-art solutions for communication efficiency. Furthermore, it improves model generalization by up to 1.7%.
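
The core idea summarized in the abstract — each client trains on a sub-model of the global model, so only the corresponding weights are exchanged — can be illustrated with a minimal sketch. This is not the authors' implementation: the layer size, the 75% keep ratio, the uniform random selection, and the stand-in local-training step are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code) of federated dropout for one dense layer:
# the server selects a subset of output units for a client, sends only that
# sub-matrix, receives the trained sub-matrix back, and merges it into the
# full global model. All sizes and rates below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def select_submodel(weights, keep_ratio=0.75):
    """Pick a random subset of output units; return their indices and weights."""
    n_units = weights.shape[1]
    kept = rng.choice(n_units, size=int(n_units * keep_ratio), replace=False)
    return kept, weights[:, kept]          # only this slice is sent to the client

def client_update(sub_weights, lr=0.1):
    """Stand-in for local training: apply a fake gradient step on the sub-model."""
    fake_grad = rng.normal(scale=0.01, size=sub_weights.shape)
    return sub_weights - lr * fake_grad

def merge_update(weights, kept, updated_sub):
    """Write the client's trained sub-model back into the full global model."""
    merged = weights.copy()
    merged[:, kept] = updated_sub
    return merged

# One communication round for a single layer and a single client.
global_w = rng.normal(size=(128, 64))        # hypothetical dense layer: 128 -> 64
kept_idx, sub_w = select_submodel(global_w)  # server -> client: ~75% of the weights
trained_sub = client_update(sub_w)           # local training on the sub-model
global_w = merge_update(global_w, kept_idx, trained_sub)  # client -> server merge
```

The adaptive part of AFD lies in how the kept subset is chosen; the uniform random choice above is only a placeholder for whatever selection policy is used, and the communication saving comes from the smaller sub-matrices exchanged in both directions.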

