IEEE International Symposium on High Performance Computer Architecture

Making Memristive Neural Network Accelerators Reliable

Abstract

Deep neural networks (DNNs) have attracted substantial interest in recent years due to their superior performance on many classification and regression tasks as compared to other supervised learning models. DNNs often require a large amount of data movement, resulting in performance and energy overheads. One promising way to address this problem is to design an accelerator based on in-situ analog computing that leverages the fundamental electrical properties of memristive circuits to perform matrix-vector multiplication. Recent work on analog neural network accelerators has shown great potential in improving both the system performance and the energy efficiency. However, detecting and correcting the errors that occur during in-memory analog computation remains largely unexplored. The same electrical properties that provide the performance and energy improvements make these systems especially susceptible to errors, which can severely hurt the accuracy of the neural network accelerators. This paper examines a new error correction scheme for analog neural network accelerators based on arithmetic codes. The proposed scheme encodes the data through multiplication by an integer, which preserves addition operations through the distributive property. Error detection and correction are performed through a modulus operation and a correction table lookup. This basic scheme is further improved by data-aware encoding to exploit the state dependence of the errors, and by knowledge of how critical each portion of the computation is to overall system accuracy. By leveraging the observation that a physical row that contains fewer 1s is less susceptible to an error, the proposed scheme increases the effective error correction capability with less than 4.5% area and less than 4.7% energy overheads. When applied to a memristive DNN accelerator performing inference on the MNIST and ILSVRC-2012 datasets, the proposed technique reduces the respective misclassification rates by 1.5x and 1.1x.
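
The mechanism above can be made concrete with a small worked example. Below is a minimal sketch of an AN code in Python, assuming an encoding constant A = 29 and 14-bit data words; these parameters are illustrative choices, not values from the paper. Multiples of A are the valid codewords, addition is preserved by distributivity, and a nonzero residue modulo A indexes a correction table of single-bit error magnitudes.

```python
# Minimal AN-code sketch. A = 29 and NBITS = 14 are assumptions for
# illustration only; the paper selects its own parameters.

A = 29       # encoding constant; valid codewords are exactly the multiples of A
NBITS = 14   # with A = 29, all 2*14 single-bit error residues are distinct

def encode(x: int) -> int:
    """Multiply by A; addition is preserved: A*x + A*y == A*(x + y)."""
    return A * x

def decode(c: int) -> int:
    return c // A

# Correction table: residue (mod A) -> the additive error it implies.
# Flipping bit i of a codeword adds +2^i (0 -> 1) or -2^i (1 -> 0).
correction_table = {}
for i in range(NBITS):
    for e in (1 << i, -(1 << i)):
        correction_table[e % A] = e
assert len(correction_table) == 2 * NBITS   # no two errors share a residue

def check_and_correct(c: int) -> int:
    """Detect via the modulus; correct a single-bit error by table lookup."""
    r = c % A
    return c if r == 0 else c - correction_table[r]

# Addition distributes over the encoding, so sums remain valid codewords.
x, y = 5, 7
assert encode(x) + encode(y) == encode(x + y)

# Inject a single-bit fault into a codeword and recover the original value.
corrupted = encode(x) ^ (1 << 3)            # flip bit 3
assert decode(check_and_correct(corrupted)) == x
```

Because a sum of codewords is itself a codeword, a whole accumulated dot product can be verified with a single modulus check at readout, which is what makes the scheme a natural fit for in-situ matrix-vector multiplication.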
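The data-aware refinement rests on the observation quoted above: a physical row holding fewer 1s is less likely to see an error. The abstract does not detail the mechanism, so the following is purely a hypothetical illustration of that observation, storing whichever of a word or its bitwise complement contains fewer 1s and recording the choice in a one-bit flag; this caps the number of 1s per stored word at half the width.

```python
# Hypothetical inversion-flag sketch; not necessarily the paper's encoding.

NBITS = 16                       # assumed word width for this illustration
MASK = (1 << NBITS) - 1

def popcount(w: int) -> int:
    return bin(w).count("1")

def encode_word(w: int) -> tuple[int, int]:
    """Store the complement when it carries fewer 1s; the flag records the choice."""
    inv = w ^ MASK
    return (inv, 1) if popcount(inv) < popcount(w) else (w, 0)

def decode_word(stored: int, flag: int) -> int:
    return stored ^ MASK if flag else stored

w = 0b1111_0111_1011_1101        # a 1-heavy word (13 ones)
stored, flag = encode_word(w)
assert popcount(stored) <= NBITS // 2   # worst-case 1-count halved by construction
assert decode_word(stored, flag) == w
```

Any scheme in this spirit trades one flag bit per word for a bounded 1-count; the paper's actual encoding and its interaction with the correction table may differ.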
