IEEE Transactions on Mobile Computing

FLASH: Heterogeneity-Aware Federated Learning at Scale

Abstract

Federated learning (FL) has become a promising machine learning paradigm, yet the impact of heterogeneous hardware specifications and dynamic device states on the FL process has not been studied systematically. This paper presents the first large-scale study of this impact, based on real-world data collected from 136k smartphones. We conducted extensive experiments on our proposed heterogeneity-aware FL platform, FLASH, to systematically explore the performance of state-of-the-art FL algorithms and key FL configurations in heterogeneity-aware and -unaware settings, and found the following. (1) Heterogeneity causes accuracy to drop by up to 9.2% and convergence time to increase by 2.32×. (2) Heterogeneity negatively impacts popular aggregation algorithms; e.g., the accuracy-variance reduction brought by q-FedAvg drops by 17.5%. (3) Heterogeneity does not significantly worsen the accuracy loss caused by gradient-compression algorithms, but it increases the convergence time by up to 2.5×. (4) Heterogeneity hinders client-selection algorithms from selecting the intended clients, reducing their effectiveness; e.g., the accuracy increase brought by the state-of-the-art client-selection algorithm drops by 73.9%. (5) Heterogeneity causes the optimal FL hyper-parameters to drift significantly; specifically, the heterogeneity-unaware setting favors a looser deadline and a higher reporting fraction to achieve better training performance. (6) Heterogeneity results in a non-trivial fraction of failed clients (more than 10%) and leads to participation bias (the top 30% of clients contribute 86% of the computation). Our FLASH platform and data have been publicly open-sourced.
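Findings (5) and (6) hinge on how a reporting deadline interacts with heterogeneous client speeds: slow clients miss the deadline, shrinking the reporting fraction and biasing participation toward fast devices. The sketch below is a minimal, hypothetical simulation of one deadline-gated FedAvg round (scalar stand-ins for model updates, exponentially distributed training times); the function and variable names are illustrative assumptions, not part of the FLASH platform.

```python
import random

def fedavg_round(client_updates, client_sizes, report_times, deadline):
    """One deadline-gated FedAvg round (hypothetical sketch).

    Clients whose simulated local-training time exceeds the deadline
    fail to report and are excluded; the rest are combined by a
    data-size-weighted average, as in standard FedAvg.
    Returns the aggregated update and the reporting fraction.
    """
    reported = [i for i, t in enumerate(report_times) if t <= deadline]
    if not reported:
        return None, 0.0
    total = sum(client_sizes[i] for i in reported)
    agg = sum(client_updates[i] * client_sizes[i] for i in reported) / total
    return agg, len(reported) / len(report_times)

random.seed(0)
n = 100
updates = [random.gauss(0.0, 1.0) for _ in range(n)]      # scalar stand-ins for model updates
sizes = [random.randint(50, 500) for _ in range(n)]       # local dataset sizes
times = [random.expovariate(1 / 30.0) for _ in range(n)]  # heterogeneous training times (s)

# A looser deadline admits more of the slow clients, raising the
# reporting fraction at the cost of a longer round.
for deadline in (20.0, 60.0, 120.0):
    _, frac = fedavg_round(updates, sizes, times, deadline)
    print(f"deadline={deadline:5.0f}s  reporting fraction={frac:.2f}")
```

Under this toy model, tightening the deadline silently drops the slowest devices from every round, which is exactly the participation bias the study quantifies.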
