IEEE International Workshop on Signal Processing Systems

A Parallel RRAM Synaptic Array Architecture for Energy-Efficient Recurrent Neural Networks



Abstract

Recurrent neural networks (RNNs) provide excellent performance on applications with sequential data, such as speech recognition. On-chip implementation of RNNs is difficult due to their very large number of parameters and computations. In this work, we first present a training method for an LSTM model for language modeling on the Penn Treebank dataset with binary weights and multi-bit activations, and then map it onto a fully parallel RRAM array architecture ("XNOR-RRAM"). An energy-efficient XNOR-RRAM-array-based system for the LSTM RNN is implemented and benchmarked on the Penn Treebank dataset. Our results show that 4-bit activation precision provides a near-optimal perplexity of 115.3 with an estimated energy efficiency of ~27 TOPS/W.
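The abstract's core operation — a dot product between binary {-1, +1} weights and multi-bit activations, which the XNOR-RRAM array evaluates as per-bit-plane XNOR-and-popcount passes — can be sketched as follows. This is a minimal illustrative model, not the paper's implementation; the function names and the uniform [0, 1] activation quantization are assumptions.

```python
def binarize(weights):
    """Binarize real-valued weights to {-1, +1} by sign (assumed scheme)."""
    return [1 if w >= 0 else -1 for w in weights]

def quantize_activation(xs, bits=4):
    """Uniformly quantize activations in [0, 1] to `bits`-bit integers
    (4-bit precision matches the abstract's reported operating point)."""
    levels = (1 << bits) - 1
    return [round(min(max(x, 0.0), 1.0) * levels) for x in xs]

def xnor_dot(w_bin, x_q, bits=4):
    """Bit-serial dot product: each activation bit-plane pairs with the
    binary weights (an XNOR + popcount pass in the array), is scaled by
    2^b, and the partial sums are accumulated digitally."""
    acc = 0
    for b in range(bits):
        plane = [(x >> b) & 1 for x in x_q]  # 0/1 bit-plane of activations
        acc += (1 << b) * sum(w * p for w, p in zip(w_bin, plane))
    return acc
```

Because sum_b 2^b * bit_b(x) reconstructs x exactly, `xnor_dot` returns the same value as an ordinary integer dot product, while each per-bit pass maps to a single parallel array read.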
