Journal: IEEE Transactions on Circuits and Systems II

Implementation of neural networks on massive memory organizations



Abstract

Simulations of artificial neural networks (ANNs) on serial machines have proved to be too slow to be of practical significance. It was realized that parallel machines would have to be used to exploit the inherent parallelism in these models. The SIMD architecture presented has n PEs and n^2 memory modules arranged in an n × n array. This massive memory is used to store the weights of the neural network being simulated. It is shown how networks with sparse connectivity among neurons can be simulated in O((n + e)^(1/2)) time, where n is the number of neurons and e the number of interconnections in the network. Preprocessing is carried out on the connection matrix of the sparse network, resulting in data movement that has an optimal asymptotic time complexity and a small constant factor.
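As a rough illustration of the scheme the abstract describes, the Python sketch below lays the nonzero weights of a sparse network out on a conceptual n × n grid of memory modules and lets one virtual PE per column accumulate each neuron's input. The module-to-weight mapping, the tanh activation, and all identifiers are illustrative assumptions; the paper's actual preprocessing of the connection matrix and its data-movement schedule are not reproduced here.

    import numpy as np

    def simulate_sparse_ann_step(edges, activations, n):
        # Hypothetical mapping (not necessarily the paper's): module (i, j)
        # of an n x n grid holds the weight of the connection from neuron i
        # to neuron j; unused modules stay empty for a sparse network.
        modules = [[None] * n for _ in range(n)]
        for (i, j, w) in edges:
            modules[i][j] = w

        new_act = np.zeros(n)
        for j in range(n):              # one virtual PE per destination neuron
            acc = 0.0
            for i in range(n):          # that PE scans its memory column
                if modules[i][j] is not None:
                    acc += modules[i][j] * activations[i]
            new_act[j] = np.tanh(acc)   # placeholder activation function
        return new_act

    if __name__ == "__main__":
        # Toy network: 4 neurons, 5 directed connections (source, dest, weight).
        edges = [(0, 1, 0.5), (1, 2, -0.3), (2, 3, 0.8), (0, 3, 0.1), (3, 0, 0.2)]
        act = np.array([1.0, 0.0, 0.5, -0.2])
        print(simulate_sparse_ann_step(edges, act, n=4))

In the actual architecture the outer loop over j would run in lockstep across the n PEs, and the preprocessing step would pack the sparse weights so that the stated O((n + e)^(1/2)) data-movement bound is met; this toy version only shows the weight layout and the per-neuron accumulation.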

