Journal: Computational Intelligence and Neuroscience

Federated Learning Optimization Algorithm for Automatic Weight Optimal


Abstract

Federated learning (FL), a distributed machine-learning framework, is poised to effectively protect data privacy and security, and it has been widely applied in a variety of fields in recent years. However, the system heterogeneity and statistical heterogeneity of FL pose serious obstacles to the quality of the global model. This study investigates server and client resource allocation in the context of FL system resource efficiency and proposes the FedAwo optimization algorithm. The approach combines adaptive learning with federated learning, making full use of the server's computing resources to calculate the optimal weight value for each client. It aggregates the global model according to these optimal weight values, which significantly mitigates the detrimental effects of statistical and system heterogeneity. In traditional FL, we found that the local training of many clients converges earlier than the specified number of epochs. Nevertheless, under the traditional FL protocol, each client must still train for the full specified number of epochs, rendering a large amount of client-side computation meaningless. To further lower the training cost, the augmented FedAwo* algorithm is proposed. FedAwo* takes the heterogeneity of clients into account and sets a criterion for local convergence. When a client's local model reaches this criterion, it is returned to the server immediately. In this way, the number of local epochs per client can be adapted dynamically. Extensive experiments on the MNIST and Fashion-MNIST public datasets show that the global model converges faster and achieves higher accuracy under the FedAwo and FedAwo* algorithms than under the FedAvg, FedProx, and FedAdp baselines.
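The two ideas in the abstract — server-side weighted aggregation and a local convergence criterion that cuts training short — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: FedAwo's server-side learning of the optimal weights is replaced here by externally supplied weights, and the names `aggregate`, `local_train`, and `tol` are assumptions for illustration.

```python
import numpy as np

def aggregate(client_models, weights):
    """Weighted aggregation of flattened client parameters.

    In FedAwo the server computes an optimal weight per client;
    here the weights are simply passed in and normalized to a
    convex combination for illustration.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights
    stacked = np.stack(client_models)          # (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)

def local_train(params, grad_fn, lr=0.5, max_epochs=50, tol=1e-4):
    """Local training with an early-stop rule in the spirit of FedAwo*.

    Instead of always running `max_epochs`, stop as soon as the update
    norm falls below `tol` (the local convergence criterion) and report
    how many epochs were actually used.
    """
    epochs_used = 0
    for epoch in range(max_epochs):
        step = lr * grad_fn(params)
        params = params - step
        epochs_used = epoch + 1
        if np.linalg.norm(step) < tol:         # local convergence reached
            break
    return params, epochs_used

# Toy usage: two clients with quadratic objectives 0.5 * ||p - target||^2,
# whose gradient is simply (p - target).
m1, e1 = local_train(np.array([1.0, 1.0]), lambda p: p - np.array([0.0, 2.0]))
m2, e2 = local_train(np.array([1.0, 1.0]), lambda p: p - np.array([2.0, 0.0]))
global_model = aggregate([m1, m2], [1.0, 1.0])
```

With equal weights the toy global model lands midway between the two client optima, and both clients stop well before `max_epochs`, which is the computation the abstract describes as otherwise wasted.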
