Parallel Computing

A parallel algorithm for gradient training of feedforward neural networks



Abstract

This paper presents a message-passing architecture that simulates multilayer neural networks, adjusting the network's weights for each training pair consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising from a parallel implementation are clarified using Petri nets. Then an implementation of each neuron, split into a synapse part and a body part, is proposed by arranging virtual processors in a cascaded torus topology. The mapping of virtual processors onto node processors is chosen so as to minimize external communication.
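The abstract's decomposition of a neuron into communicating "synapse" and "body" virtual processors can be illustrated with a minimal sequential sketch. This is not the paper's implementation (which uses a message-passing architecture on a cascaded torus topology); the class names `Synapse`, `Body`, and `Neuron` are illustrative assumptions, and the explicit message lists stand in for inter-processor communication.

```python
# Hedged sketch of the synapse/body split described in the abstract.
# Each Synapse scales one input and "sends" the product as a message;
# the Body collects all messages, sums them, and applies the activation.
# All names here are illustrative, not taken from the paper.

import math


class Synapse:
    """Virtual processor holding one weight; emits weight * input as a message."""

    def __init__(self, weight):
        self.weight = weight

    def forward(self, x):
        return self.weight * x  # the message passed on to the body


class Body:
    """Virtual processor that accumulates synapse messages and activates."""

    def forward(self, messages):
        return math.tanh(sum(messages))


class Neuron:
    """A neuron assembled from its synapse processors and one body processor."""

    def __init__(self, weights):
        self.synapses = [Synapse(w) for w in weights]
        self.body = Body()

    def forward(self, inputs):
        msgs = [s.forward(x) for s, x in zip(self.synapses, inputs)]
        return self.body.forward(msgs)


neuron = Neuron([0.5, -0.25])
y = neuron.forward([1.0, 2.0])  # tanh(0.5*1.0 - 0.25*2.0) = tanh(0.0) = 0.0
```

In the paper's parallel setting these synapse and body units would be virtual processors exchanging actual messages, and mapping them onto physical node processors is done so that most of this traffic stays local.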
