IEEE Transactions on Circuits and Systems II: Express Briefs

A ReRAM-Based Computing-in-Memory Convolutional-Macro With Customized 2T2R Bit-Cell for AIoT Chip IP Applications

Abstract

To reduce the energy consumption and time latency incurred by the von Neumann architecture, this brief develops a complete computing-in-memory (CIM) convolutional macro based on a ReRAM array for the convolutional layers of a LeNet-like convolutional neural network (CNN). We binarize the input layer and the first convolutional layer to obtain higher accuracy. The proposed ReRAM-CIM convolutional macro is suitable as an IP core for the convolutional layers of any binarized neural network. This brief customizes a bit-cell consisting of 2T2R ReRAM cells and treats 9 x 8 bit-cells as one unit to achieve high hardware compute accuracy, fast read/compute speed, and low power consumption. The ReRAM-CIM convolutional macro achieves a 50 ns product-sum computing time for one complete convolution operation in a convolutional layer of the customized CNN, with an accuracy of 96.96% on the MNIST database and a peak energy efficiency of 58.82 TOPS/W.
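The abstract describes a 9 x 8 bit-cell unit that performs the product-sum of a binarized convolution. As a rough behavioral sketch only (not the authors' circuit), the NumPy model below assumes the 9 rows correspond to a flattened 3 x 3 kernel window and that each 2T2R bit-cell encodes a signed binary weight (+1/-1) as a differential ReRAM pair; all names and shapes are illustrative.

# Behavioral sketch of one 9x8 product-sum unit (illustrative assumptions,
# not the paper's circuit-level implementation).
import numpy as np

def binarize(x):
    # Map values to {-1, +1}, a common binarization for BNN layers.
    return np.where(x >= 0, 1, -1)

def product_sum_unit(input_patch, weight_unit):
    # input_patch : (9,) activations for one 3x3 window, flattened
    # weight_unit : (9, 8) signed binary weights, one column per output channel
    # returns     : (8,) product-sums, i.e., the column (bit-line) totals the
    #               macro would read out for this window
    return binarize(input_patch) @ weight_unit

# Illustrative usage: one window against one 9x8 unit of binary weights.
rng = np.random.default_rng(0)
patch = rng.standard_normal(9)                     # 9 raw activations
weights = binarize(rng.standard_normal((9, 8)))    # 9x8 signed binary weights
print(product_sum_unit(patch, weights))            # 8 partial sums in [-9, 9]

In the macro described by the abstract, the eight column sums of such a unit would presumably be produced in parallel within the reported 50 ns product-sum cycle; the sketch only mirrors the arithmetic, not the analog readout.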