
Dynamic Federated Learning



Abstract

Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments. While many federated learning architectures process data in an online manner, and are hence adaptive by nature, most performance analyses assume static optimization problems and offer no guarantees in the presence of drifts in the problem solution or data characteristics. We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data. Under a nonstationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm. The results clarify the trade-off between convergence and tracking performance.
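The setting in the abstract can be illustrated with a small simulation: the true minimizer of an aggregate quadratic loss drifts as a random walk, and at each iteration a random subset of agents takes a noisy local gradient step before the server averages their models. All parameter values, the quadratic loss, and the simple averaging rule are illustrative assumptions for this sketch, not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 20, 5          # number of agents, model dimension
mu = 0.05             # learning rate (step size)
sigma_walk = 0.01     # std of the random-walk drift on the true minimizer
grad_noise = 0.1      # std of per-agent gradient noise (data variability)
participation = 0.3   # probability each agent is active in a given iteration

w_star = np.zeros(d)  # nonstationary true minimizer
w = np.zeros(d)       # global (aggregate) model

for t in range(2000):
    # Nonstationary model: the minimizer follows a random walk.
    w_star = w_star + sigma_walk * rng.standard_normal(d)

    # A random subset of available agents performs local updates.
    active = rng.random(K) < participation
    local_models = []
    for k in range(K):
        if not active[k]:
            continue
        # Noisy gradient of the quadratic loss 0.5 * ||w - w_star||^2.
        grad = (w - w_star) + grad_noise * rng.standard_normal(d)
        local_models.append(w - mu * grad)

    # The server averages the participating agents' models.
    if local_models:
        w = np.mean(local_models, axis=0)

# With a constant step size, the model tracks the drifting minimizer
# up to a steady-state error rather than converging exactly.
err = float(np.linalg.norm(w - w_star))
```

Rerunning the sketch with a smaller `mu` makes each step contract more slowly toward the moving `w_star`, so the tracking error grows, which is consistent with the abstract's tracking term being inversely proportional to the learning rate.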
