IEEE Journal on Selected Areas in Communications

GIFT: Toward Accurate and Efficient Federated Learning With Gradient-Instructed Frequency Tuning

Abstract

Federated learning (FL) enables distributed clients to collectively train a global model without revealing their private data; for efficiency, clients synchronize their gradients only periodically. However, periodic synchronization can degrade model convergence accuracy when data distributions are inconsistent across clients. In this work, we find a strong correlation between FL accuracy loss and the synchronization frequency, and we fine-tune the synchronization frequency at training runtime to make FL both accurate and efficient. Specifically, since the FL privacy requirement permits only gradients to be used for frequency-tuning decisions, we propose a novel metric called gradient consistency, which effectively reflects the training status despite the instability of realistic FL scenarios. We further devise a feedback-driven algorithm called Gradient-Instructed Frequency Tuning (GIFT), which adaptively increases or decreases the synchronization frequency based on the gradient consistency metric. We have implemented GIFT in PyTorch, and large-scale evaluations show that it improves FL accuracy by up to 10.7% while reducing training time by 58.1%.
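The abstract does not specify how gradient consistency is computed or how the feedback rule maps it to a frequency, so the following is a minimal Python/PyTorch sketch of the general idea, not the paper's implementation. It assumes a consistency score defined as the squared norm of the averaged client gradient divided by the average squared per-client gradient norm (a common gradient-coherence proxy), and it expresses synchronization frequency through its inverse, the number of local steps per round; the names tune_local_steps, high, low, min_steps, and max_steps are hypothetical placeholders.

import torch

def gradient_consistency(client_grads):
    # Squared norm of the averaged gradient relative to the average
    # squared per-client norm. Near 1 when clients' gradients agree,
    # near 1/N when they point in unrelated directions.
    # (Illustrative definition; the paper's metric may differ.)
    stacked = torch.stack([g.flatten() for g in client_grads])
    mean_grad = stacked.mean(dim=0)
    return (mean_grad.norm() ** 2 / (stacked.norm(dim=1) ** 2).mean()).item()

def tune_local_steps(local_steps, consistency, high=0.5, low=0.2,
                     min_steps=1, max_steps=64):
    # Feedback rule in the spirit of GIFT: consistent gradients tolerate
    # less frequent synchronization (more local steps per round), while
    # inconsistent gradients call for more frequent synchronization.
    # The thresholds and the doubling/halving policy are assumptions.
    if consistency > high:
        return min(local_steps * 2, max_steps)
    if consistency < low:
        return max(local_steps // 2, min_steps)
    return local_steps

# Example: eight simulated clients with same-shaped flattened gradients.
grads = [torch.randn(1000) for _ in range(8)]
steps = tune_local_steps(16, gradient_consistency(grads))

By the Cauchy-Schwarz inequality the score is at most 1, and for N clients with uncorrelated gradient directions it concentrates near 1/N, which is what makes such a ratio usable as a bounded feedback signal even when individual gradient norms fluctuate across rounds.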
