A digital neural network LSI using sparse memory access architecture

International Conference on Microelectronics for Neural Networks

Abstract

A sparse memory access architecture is proposed to achieve a high-computational-speed neural network LSI. The architecture uses two key techniques, compressible synapse weight neuron calculation and differential neuron operation, to reduce both the number of accesses to synapse weight memories and the number of neuron calculations without an accuracy penalty. In a pattern recognition example, the numbers of memory accesses and neuron calculations are reduced to 0.87% of those in the conventional method, and the practical performance is 18 GCPS.
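The abstract names the two techniques but gives no implementation detail here. The Python sketch below is only a rough software analogy, assuming that compressible synapse weight calculation amounts to skipping weights stored as zero and that differential neuron operation amounts to updating each weighted sum only from the inputs that changed since the previous pattern; the function name sparse_update and all array shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def sparse_update(weights, prev_inputs, inputs, prev_sums):
    """Incrementally update each neuron's weighted sum, touching only the
    inputs that changed (differential operation) and the non-zero synapse
    weights (compressed weights)."""
    sums = prev_sums.copy()
    changed = np.flatnonzero(inputs != prev_inputs)   # differential neuron operation
    if changed.size == 0:
        return sums                                   # nothing changed: no memory access at all
    delta_x = inputs[changed] - prev_inputs[changed]
    for j in range(weights.shape[0]):                 # one output neuron per weight row
        w = weights[j, changed]
        nz = w != 0.0                                 # skip weights compressed away to zero
        if nz.any():                                  # only then read "weight memory"
            sums[j] += w[nz] @ delta_x[nz]            # exact incremental update of the sum
    return sums
```

Because only exactly-zero weights and unchanged inputs are skipped, every weighted sum equals a full recomputation, which is consistent with the abstract's claim of no accuracy penalty; the 0.87% reduction and the 18 GCPS figure are results of the paper's own hardware and benchmark, not of this sketch.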
