High Performance Computing Symposium

Comparative study of message passing and shared memory parallel programming models in neural network training

Abstract

A comparative performance study is presented of a coarse-grained parallel neural network training code implemented in both OpenMP and MPI, the standards for shared memory and message passing parallel programming environments, respectively. In addition, these versions of the parallel training code are compared to an implementation using SHMEM, the native SGI/Cray environment for shared memory programming. The multiprocessor platform used is an SGI/Cray Origin 2000 with up to 32 processors. In this study, the native Cray environment outperforms MPI over the entire range of processors used, while OpenMP performs better than the other two environments when more than 19 processors are used. The efficiency is always greater than 60%, regardless of the parallel programming environment used and of the number of processors.
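The paper's source code is not reproduced on this page, but as a rough illustration of what a coarse-grained, data-parallel training step might look like under the message passing model, the hypothetical MPI fragment below shards the training data across processes, computes a local gradient per process, and combines the gradients with MPI_Allreduce. The function local_gradient, the weight count, the epoch count, and the learning rate are illustrative assumptions, not details taken from the paper.

/* Hypothetical sketch, NOT the authors' code: a coarse-grained,
 * data-parallel training step in MPI. Each process computes the
 * gradient over its own shard of the training data, the per-process
 * gradients are summed with MPI_Allreduce, and every process applies
 * the same averaged update, keeping the weight vectors replicated. */
#include <mpi.h>
#include <stdio.h>

#define N_WEIGHTS 64    /* illustrative network size   */
#define N_EPOCHS  10    /* illustrative epoch count    */
#define LRATE     0.01  /* illustrative learning rate  */

/* Placeholder for the real work: a forward/backward pass of the
 * network over this process's local shard of the training set. */
static void local_gradient(const double *w, double *grad, int rank)
{
    for (int i = 0; i < N_WEIGHTS; i++)
        grad[i] = 0.001 * (w[i] - rank);  /* dummy gradient */
}

int main(int argc, char **argv)
{
    int rank, nprocs;
    double w[N_WEIGHTS] = {0.0}, grad[N_WEIGHTS], gsum[N_WEIGHTS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int epoch = 0; epoch < N_EPOCHS; epoch++) {
        /* Coarse-grained phase: purely local computation. */
        local_gradient(w, grad, rank);

        /* One collective per epoch: sum gradients across processes. */
        MPI_Allreduce(grad, gsum, N_WEIGHTS, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        /* Identical update everywhere keeps the weights consistent. */
        for (int i = 0; i < N_WEIGHTS; i++)
            w[i] -= LRATE * gsum[i] / nprocs;
    }

    if (rank == 0)
        printf("trained for %d epochs on %d processes\n",
               N_EPOCHS, nprocs);
    MPI_Finalize();
    return 0;
}

An OpenMP version of the same coarse-grained decomposition would typically replace the explicit collective with a parallel loop over the data shards and a reduction into a shared gradient array, while a SHMEM version would use one-sided put/get operations on symmetric memory; the abstract's comparison is between these three realizations of one decomposition, not between different training algorithms.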
