International conference on algorithms and architectures for parallel processing

Understanding the Resource Demand Differences of Deep Neural Network Training



Abstract

As more deep neural networks (DNNs) are deployed in the real world, their heavy computing demand becomes an obstacle. In this paper, we analyze the resource demand differences of DNN training to help understand its performance characteristics. Specifically, we study both shared-memory and message-passing behavior in distributed DNN training from layer-level and model-level perspectives. From the layer-level perspective, we evaluate and compare the resource demands of basic layers. From the model-level perspective, we measure parallel training of representative models and then explain the causes of performance differences based on their structures. Experimental results reveal that different models vary in resource demand, and even a single model can have very different resource demands with different input sizes. Further, we offer some observations and recommendations for improving the performance of on-chip training and parallel training.
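The layer-level differences the abstract describes can be illustrated with a back-of-envelope estimate. The sketch below (not the paper's actual measurement methodology; the layer sizes are hypothetical values chosen for illustration) compares a 3x3 convolution against a fully connected layer: the convolution dominates in compute (FLOPs) while the fully connected layer dominates in parameter memory, which is one reason their training resource demands differ.

```python
# Illustrative, analytical estimate of per-layer resource demand
# (compute vs. parameter memory) for two basic DNN layers.

def conv2d_demand(h, w, c_in, c_out, k):
    """FLOPs and parameter count for one conv layer (stride 1, 'same' padding)."""
    params = c_out * c_in * k * k
    # each of the h*w output positions per channel performs a k*k*c_in
    # multiply-accumulate dot product (2 FLOPs per MAC)
    flops = 2 * params * h * w
    return flops, params

def fc_demand(n_in, n_out):
    """FLOPs and parameter count for one fully connected layer."""
    params = n_in * n_out
    flops = 2 * params  # one MAC per weight
    return flops, params

# Hypothetical example sizes: a mid-network conv vs. a large classifier layer.
conv_flops, conv_params = conv2d_demand(56, 56, 64, 64, 3)
fc_flops, fc_params = fc_demand(4096, 4096)
print(f"conv: {conv_flops / 1e9:.3f} GFLOPs, {conv_params / 1e6:.3f} M params")
print(f"fc:   {fc_flops / 1e9:.3f} GFLOPs, {fc_params / 1e6:.3f} M params")
```

With these sizes the convolution needs roughly 7x the FLOPs of the fully connected layer while holding only about 0.2% as many parameters, so the former stresses compute throughput and the latter stresses memory capacity and communication volume in parallel training.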
