IEEE Journal on Selected Areas in Communications

DynaComm: Accelerating Distributed CNN Training Between Edges and Clouds Through Dynamic Communication Scheduling



Abstract

To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has become an emerging topic. Typically, edge devices collaboratively train a shared model on data generated in real time, using the Parameter Server framework. Although the edge devices share the computing workload, distributed training over edge networks remains time-consuming because of the parameter and gradient transmission procedures between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal layer-wise overlapping of communication and computation at run-time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling in all cases compared to competing strategies, while model accuracy remains unaffected.
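To illustrate the general idea of layer-wise communication/computation overlapping described in the abstract, the following is a minimal sketch, not the authors' implementation: as soon as the gradient of one layer is produced during the backward pass, it is handed to a background sender that pushes it toward a parameter server while the backward pass continues on earlier layers. The `ParameterServerStub` class, layer names, sizes, and timing constants are hypothetical placeholders.

```python
import queue
import threading
import time

import numpy as np


class ParameterServerStub:
    """Hypothetical stand-in for a remote parameter server."""

    def push_gradient(self, layer_name, grad):
        # Simulate network transfer time proportional to payload size.
        time.sleep(grad.size * 1e-7)


def sender_loop(ps, grad_queue):
    """Drain per-layer gradients and push them while computation continues."""
    while True:
        item = grad_queue.get()
        if item is None:          # Sentinel: backward pass finished.
            break
        layer_name, grad = item
        ps.push_gradient(layer_name, grad)


def backward_pass_with_overlap(layer_sizes, ps):
    grad_queue = queue.Queue()
    sender = threading.Thread(target=sender_loop, args=(ps, grad_queue))
    sender.start()

    # Walk layers from output to input, as a backward pass would.
    for name, size in reversed(layer_sizes):
        time.sleep(0.01)                      # Simulated per-layer gradient computation.
        grad = np.zeros(size, dtype=np.float32)
        grad_queue.put((name, grad))          # Communication starts immediately.

    grad_queue.put(None)                      # Signal the sender to stop.
    sender.join()


if __name__ == "__main__":
    layers = [("conv1", 10_000), ("conv2", 50_000), ("fc", 1_000_000)]
    backward_pass_with_overlap(layers, ParameterServerStub())
```

In this sketch the overlap is fixed (one segment per layer); DynaComm's contribution, per the abstract, is to decide the segmentation dynamically at run-time rather than using such a static scheme.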
