IEEE Transactions on Neural Networks
Transputers and neural networks: an analysis of implementation constraints and performance

Abstract

A performance analysis is presented that focuses on the achievable speedup of a neural network implementation and on the optimal size of a processor network (transputers or multicomputers that communicate in a comparable manner). For fully and randomly connected neural networks the topology of the processor network can only have a small, constant effect on the iteration time. With randomly connected neural networks, even severely limiting node fan-in has only a negligible effect on decreasing the communication overhead. The class of modular neural networks is studied as a separate case which is shown to have better implementation characteristics. On the basis of implementation constraints, it is argued that randomly connected neural networks cannot be realistic models of the brain.
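The speedup-versus-overhead trade-off the abstract describes can be illustrated with a toy cost model for one iteration of a fully connected N-neuron network on P processors. This is a hedged sketch only: the formula, the function names, and all timing constants (`t_calc`, `t_link`, `t_setup`) are illustrative assumptions, not the paper's actual analysis.

```python
def iteration_time(P, N, t_calc=1e-6, t_link=2e-6, t_setup=1e-4):
    """Hypothetical per-iteration time for a fully connected N-neuron
    network split across P processors (all constants illustrative)."""
    compute = (N * N / P) * t_calc   # each processor updates N/P neurons
    if P == 1:
        return compute               # single processor: no link traffic
    broadcast = N * t_link           # every activation crosses a link
    setup = P * t_setup              # per-iteration message start-up cost
    return compute + broadcast + setup

def best_P(N, max_P=256):
    """Exhaustively find the processor count minimizing iteration time."""
    return min(range(1, max_P + 1), key=lambda P: iteration_time(P, N))

N = 1000
P_opt = best_P(N)
speedup = iteration_time(1, N) / iteration_time(P_opt, N)
```

With these (made-up) constants the model shows the qualitative behavior the abstract argues for: computation shrinks as 1/P while communication grows with P, so speedup saturates and there is a finite optimal processor-network size well below the point where adding transputers helps.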
