
Hardware Resource Analysis in Distributed Training with Edge Devices


Abstract

When training a deep learning model with distributed training, the hardware resource utilization of each device depends on the model structure and the number of devices used for training. Distributed training has recently been applied to edge computing. Since edge devices have hardware resource limitations such as memory, there is a need for training methods that use hardware resources efficiently. Previous research focused on reducing training time by optimizing the synchronization process between edge devices or by compressing the models. In this paper, we monitored hardware resource usage during distributed training with edge devices while varying the number of layers and the batch size of the model. We analyzed how memory usage and training time varied as the batch size and the number of layers increased. Experimental results demonstrated that the larger the batch size, the fewer synchronizations between devices, resulting in less accurate training. For the shallow model, training time increased as more devices were used, because synchronization between devices took longer than the training computation itself. This paper finds that efficient use of hardware resources for distributed training requires selecting devices with the model's complexity in mind, and that fewer layers and smaller batch sizes are preferable for efficient hardware use.
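A minimal sketch of the kind of measurement loop the abstract describes, assuming a PyTorch DistributedDataParallel setup over CPU-only edge devices with the gloo backend; the MLP, the synthetic dataset, and the swept layer/batch values are illustrative assumptions, not the authors' actual model or hardware:

```python
# Sketch: log peak memory and per-epoch training time while varying the
# number of layers and the batch size in a data-parallel run.
import time
import resource

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def make_mlp(num_layers: int, width: int = 256) -> nn.Sequential:
    """Stack `num_layers` fully connected layers (depth is the variable under study)."""
    layers = [nn.Linear(32, width), nn.ReLU()]
    for _ in range(num_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)


def train_once(num_layers: int, batch_size: int, epochs: int = 3) -> None:
    rank = dist.get_rank()
    model = DDP(make_mlp(num_layers))          # gradients are synchronized across devices
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    data = TensorDataset(torch.randn(4096, 32), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=batch_size, sampler=sampler)

    for epoch in range(epochs):
        sampler.set_epoch(epoch)
        start = time.time()
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()    # backward triggers all-reduce (the sync step)
            opt.step()
        # ru_maxrss is reported in kilobytes on Linux
        peak_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
        print(f"rank={rank} layers={num_layers} batch={batch_size} "
              f"epoch_time={time.time() - start:.2f}s peak_mem={peak_mb:.0f}MB")


if __name__ == "__main__":
    # Launch with e.g. `torchrun --nproc_per_node=4 monitor.py`;
    # gloo works without a GPU, which matches resource-limited edge boards.
    dist.init_process_group(backend="gloo")
    for layers in (2, 8):
        for batch in (32, 256):
            train_once(layers, batch)
    dist.destroy_process_group()
```

With a sweep like this, a larger batch size means fewer iterations per epoch and therefore fewer gradient synchronizations, while a deeper model shifts the balance from communication time toward computation time, which is the trade-off the paper analyzes.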
