IEEE Journal on Selected Areas in Communications

Coded Computing for Low-Latency Federated Learning Over Wireless Edge Networks



Abstract

Federated learning enables training a global model from data located at the client nodes, without sharing data or moving client data to a centralized server. The performance of federated learning in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link quality across clients. We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning to mitigate stragglers and speed up the training procedure. CodedFedL enables coded computing for non-linear federated learning by efficiently exploiting a distributed kernel embedding via random Fourier features, which transforms the training task into a computationally favourable distributed linear regression. Furthermore, clients generate local parity datasets by coding over their local datasets, while the server combines them to obtain the global parity dataset. The gradient from the global parity dataset compensates for straggling gradients during training and thereby speeds up convergence. To minimize the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute and communication delays. We also characterize the leakage in data privacy when clients share their local parity datasets with the server. Additionally, we analyze the convergence rate and iteration complexity of CodedFedL under simplifying assumptions, by treating CodedFedL as a stochastic gradient descent algorithm. Finally, to demonstrate the gains that CodedFedL can achieve in practice, we conduct numerical experiments using practical network parameters and benchmark datasets, in which CodedFedL speeds up the overall training time by up to $15\times$ in comparison to the benchmark schemes.
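The abstract outlines the core mechanism: random Fourier features turn kernel learning into distributed linear regression, each client encodes its transformed data into a local parity dataset, and the server aggregates these into a global parity dataset whose gradient stands in for straggling clients. Below is a minimal numpy sketch of that idea; the variable names, data sizes, and Gaussian encoding matrices are illustrative assumptions rather than the paper's exact construction, which additionally optimizes the coding redundancy and per-client load from the delay statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data split across clients (all sizes are illustrative).
num_clients, d, D, c = 4, 10, 64, 32          # raw dim d, RFF dim D, parity rows c
clients = [(rng.normal(size=(50, d)), rng.normal(size=(50, 1)))
           for _ in range(num_clients)]

# Random Fourier features shared by all clients (approximate an RBF kernel),
# so the training task becomes linear regression in the feature space.
W = rng.normal(size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
def rff(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Each client encodes its transformed data into a local parity dataset with a
# zero-mean random matrix Gk (scaled so E[Gk.T @ Gk] = I); the server sums the
# local parities to obtain the global parity dataset.
parity_X, parity_y = np.zeros((c, D)), np.zeros((c, 1))
client_feats = []
for Xk, yk in clients:
    Zk = rff(Xk)
    Gk = rng.normal(size=(c, Zk.shape[0])) / np.sqrt(c)
    parity_X += Gk @ Zk
    parity_y += Gk @ yk
    client_feats.append((Zk, yk))

# One gradient evaluation of the squared loss: suppose one client misses the
# epoch deadline, so its gradient is unavailable to the server.
w = np.zeros((D, 1))
stragglers = {3}
grad_full  = sum(Zk.T @ (Zk @ w - yk) for Zk, yk in client_feats)
grad_fast  = sum(Zk.T @ (Zk @ w - yk)
                 for i, (Zk, yk) in enumerate(client_feats) if i not in stragglers)
grad_coded = parity_X.T @ (parity_X @ w - parity_y)   # gradient on the parity data

print("fast-only vs full gradient, rel. err:",
      np.linalg.norm(grad_fast - grad_full) / np.linalg.norm(grad_full))
print("coded     vs full gradient, rel. err:",
      np.linalg.norm(grad_coded - grad_full) / np.linalg.norm(grad_full))
```

Because the encoding matrices are zero-mean and scaled so that E[Gk.T Gk] = I, the gradient computed on the global parity dataset is an unbiased estimate of the full gradient, which is what allows it to compensate for gradients that miss the epoch deadline; in CodedFedL the server combines it with the non-straggling clients' gradients using weights derived from the optimized coding redundancy.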

