IEEE/ACM Symposium on Edge Computing

Exploring Decentralized Collaboration in Heterogeneous Edge Training


Abstract

Recent progress in deep learning techniques has enabled collaborative edge training, which usually deploys identical neural network models globally on multiple devices to aggregate parameter updates over distributed data collection. However, as more and more heterogeneous edge devices are involved in practical training, identical model deployment across collaborative edge devices cannot be guaranteed: on one hand, weak edge devices with fewer computation resources may not keep up with the training progress of stronger ones, and appropriate customization of local model training is necessary to balance the collaboration. On the other hand, a particular local edge device may have a specific learning-task preference, so the identical global model would exceed the practical local demand and cause unnecessary computation cost. Therefore, we explored collaborative learning with heterogeneous convolutional neural networks (CNNs) in this work, aiming to address the aforementioned practical problems. Specifically, we proposed a novel decentralized collaborative training method that decouples the target CNN model into independently trainable sub-models, each corresponding to a subset of learning tasks on an edge device. After the sub-models are well trained on the edge nodes, the model parameters for individual learning tasks can be harvested from the local model on every edge device and ensembled back into a single global training model. Experiments demonstrate that, for AlexNet and VGG on the CIFAR10, CIFAR100, and KWS datasets, our decentralized training method can reduce computation load by up to 11.8× while achieving central-server test accuracy.
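The abstract does not specify how the target CNN is decoupled, so the following is only a minimal PyTorch-style sketch of the general idea, assuming class-wise task splitting and a simple row-wise merge of the classifier; the names SmallCNN, make_sub_model, ensemble_global, and task_splits are illustrative, not the authors' API, and the paper's actual decoupling of convolutional layers may differ.

```python
# Minimal sketch: decouple a CNN into per-device sub-models for class subsets,
# train them independently, then harvest parameters back into one global model.
import copy
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Stand-in for AlexNet/VGG: a feature extractor plus a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def make_sub_model(global_model, task_classes):
    """Decouple: copy the backbone and keep only the classifier rows for the
    learning tasks (classes) assigned to one edge device."""
    sub = copy.deepcopy(global_model)
    with torch.no_grad():
        sub.classifier = nn.Linear(global_model.classifier.in_features,
                                   len(task_classes))
        sub.classifier.weight.copy_(global_model.classifier.weight[task_classes])
        sub.classifier.bias.copy_(global_model.classifier.bias[task_classes])
    return sub

def ensemble_global(global_model, sub_models, task_splits):
    """Harvest: write each device's task-specific classifier rows back into
    the single global model after local training has finished."""
    with torch.no_grad():
        for sub, classes in zip(sub_models, task_splits):
            global_model.classifier.weight[classes] = sub.classifier.weight
            global_model.classifier.bias[classes] = sub.classifier.bias
    return global_model

# Example: three heterogeneous edge devices, each owning a subset of the classes.
task_splits = [list(range(0, 4)), list(range(4, 7)), list(range(7, 10))]
global_model = SmallCNN(num_classes=10)
subs = [make_sub_model(global_model, t) for t in task_splits]
# ... each sub-model would be trained independently on its device's local data ...
global_model = ensemble_global(global_model, subs, task_splits)
```

In this reading, a weaker device trains a smaller sub-model covering fewer tasks, which is one way the abstract's computation savings over identical global deployment could arise.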
