
Pipelining and parallel training of neural networks on distributed-memory multiprocessors



Abstract

This paper presents a parallel neural-network simulator implemented on a Parsytec Multicluster2 transputer system. In practice, neural networks often employ the backpropagation learning rule, since this supervised learning method applies to a wide range of recognition problems. The authors focus on accelerating backpropagation learning by combining pipelining with parallel training. The pipelining model, proposed by Klauer (1992), is independent of the particular parallel hardware used; this contribution carries the idea of concurrency and pipelining forward to a concrete implementation.
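The pipelining idea sketched in the abstract can be illustrated with a small scheduling model. The code below is a hypothetical sketch, not the authors' implementation: it assumes each processor owns one network layer, so successive training patterns stream through the layers like a pipeline, and once the pipeline is full all processors work concurrently. (In the combined scheme the abstract describes, such a pipeline would additionally be replicated for parallel training, with gradient contributions merged across replicas.)

```python
# Hypothetical sketch of layer-pipelined training schedule (forward sweep).
# Assumption: processor l holds layer l; pattern p reaches layer l at
# time step p + l, so distinct patterns occupy distinct layers at once.

def pipeline_schedule(num_layers, num_patterns):
    """Return, per time step, the active (layer, pattern) pairs during
    the forward sweep of a layer pipeline."""
    total_steps = num_patterns + num_layers - 1  # fill + drain phases
    steps = []
    for t in range(total_steps):
        active = []
        for layer in range(num_layers):
            pattern = t - layer  # pattern that layer processes at time t
            if 0 <= pattern < num_patterns:
                active.append((layer, pattern))
        steps.append(active)
    return steps

schedule = pipeline_schedule(num_layers=3, num_patterns=5)
# After the fill phase, all 3 layer-processors are busy each step.
peak_concurrency = max(len(step) for step in schedule)
```

With 3 layers and 5 patterns the sweep takes 7 steps and reaches a peak concurrency of 3, versus 15 sequential layer evaluations on one processor; this hardware-independent overlap is what the pipelining model exploits.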
