IEEE Transactions on Neural Networks

Massively parallel architectures for large scale neural network simulations

Abstract

A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale neural network simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for wafer-scale integration (WSI). They also offer a solution to the connectivity problem, to the performance degradation caused by the data-transmission bottleneck, and to the load-balancing problem, all of which must be addressed for efficient parallel processing in large-scale neural network simulations. A general neuron model is defined, and an implementation of the TLA with transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented on this architecture and applied to the traveling salesman problem and to identity mapping, respectively. A proof that performance increases almost in proportion to the number of node processors is given.
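
To make the data flow concrete, the following sketch (Python with numpy, not the authors' transputer implementation) shows one way a fully connected layer's weight matrix can be block-partitioned over a P x P toroidal lattice, with each lattice row accumulating its partial sums around the torus. The block layout, the names W_block and x_block, and the serial emulation of the ring communication are illustrative assumptions, not details taken from the paper.

# Minimal sketch of toroidal-lattice block partitioning (assumed, for
# illustration only): node processor (i, j) on a P x P torus holds one
# block of the synaptic weight matrix, and each lattice row accumulates
# its partial sums around the toroidal ring.

import numpy as np

P = 4          # the lattice is P x P node processors (assumed size)
N = 16         # neurons in the layer; divisible by P in this sketch
B = N // P     # neurons handled per node processor

rng = np.random.default_rng(0)
W = rng.standard_normal((N, N))   # full synaptic weight matrix
x = rng.standard_normal(N)        # neuron outputs from the previous step

# Node (i, j) stores block W[i*B:(i+1)*B, j*B:(j+1)*B] and sees x_block[j].
W_block = [[W[i*B:(i+1)*B, j*B:(j+1)*B] for j in range(P)] for i in range(P)]
x_block = [x[j*B:(j+1)*B] for j in range(P)]

# Emulate one synchronous update: each node computes a local partial
# product, and partial sums are rotated around the row's ring (the
# toroidal wrap-around makes every row a ring, so communication stays
# nearest-neighbor).
y = np.zeros(N)
for i in range(P):
    acc = np.zeros(B)
    for step in range(P):
        j = (i + step) % P            # column visited at this step of the ring
        acc += W_block[i][j] @ x_block[j]
    y[i*B:(i+1)*B] = acc

assert np.allclose(y, W @ x)          # matches the serial matrix-vector product

Because every row of the torus is a ring, the partial-sum accumulation needs only nearest-neighbor transfers, which is consistent with the abstract's claim that performance stays almost proportional to the number of node processors as the lattice grows.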