Asia and South Pacific Design Automation Conference

Binary convolutional neural network on RRAM



Abstract

Recent progress in machine learning has enabled low bit-level Convolutional Neural Networks (CNNs), even CNNs with binary weights and binary neurons, to achieve satisfactory recognition accuracy on the ImageNet dataset. Binary CNNs (BCNNs) make it possible to introduce low bit-level RRAM devices and low bit-level ADC/DAC interfaces into RRAM-based Computing System (RCS) design, yielding faster read-and-write operations and better energy efficiency than before. However, some design challenges remain: (1) how to split the weight matrix when one crossbar is not large enough to hold all parameters of one layer; and (2) how to design a pipeline that accelerates the whole CNN forward process. In this paper, an RRAM crossbar-based accelerator for the BCNN forward process is proposed. The design considerations specific to BCNNs are discussed in detail, especially the matrix-splitting problem and the pipeline implementation. In our experiments, when device variation is taken into account, BCNNs on RRAM show much smaller accuracy loss than multi-bit CNNs for LeNet on MNIST. For AlexNet on ImageNet, the RRAM-based BCNN accelerator saves 58.2% of energy consumption and 56.8% of area compared with a multi-bit CNN structure.
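The matrix-splitting challenge mentioned in the abstract can be illustrated with a minimal sketch: when a layer's binary (±1) weight matrix exceeds the crossbar dimensions, it is tiled across several fixed-size crossbars, each computing a partial matrix-vector product that is then accumulated. This is a hypothetical illustration of the general idea, not the paper's exact splitting scheme; the tile sizes and function names are assumptions.

```python
import numpy as np

def split_matvec(W, x, xbar_rows=128, xbar_cols=128):
    """Compute y = W @ x by tiling the binary (+1/-1) weight matrix W
    across fixed-size crossbars and accumulating the partial sums.

    Each (xbar_rows x xbar_cols) tile stands in for one RRAM crossbar;
    the inner product per tile models the analog MAC followed by an ADC
    readout, and the accumulation merges partial results across tiles.
    """
    out_dim, in_dim = W.shape
    y = np.zeros(out_dim)
    for r in range(0, out_dim, xbar_rows):        # split output rows
        for c in range(0, in_dim, xbar_cols):     # split input columns
            tile = W[r:r + xbar_rows, c:c + xbar_cols]   # one crossbar
            y[r:r + xbar_rows] += tile @ x[c:c + xbar_cols]
    return y
```

Because each tile's result only depends on its own input slice, the tiles can operate in parallel, which is what makes a pipelined forward pass across layers attractive.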

