IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits

Exploring RRAM Variability as Synapses on Inception Simulation Framework to Characterize the Prediction Accuracy and Power Estimation per Bit for Convolution Neural Network



Abstract

Resistive random access memory (RRAM) is one of the most attractive candidates for implementing hardware-based neural networks for future edge computing applications, given its analog conductance behavior (depending on the material stack), ease of integration with the Si CMOS process, lower power consumption (compared to traditional bulk phase-change RAM devices), and high integration density. The matrix convolution operation is an essential step in a convolutional neural network (CNN) because it provides a robust approximation for learning and classifying the non-linear relationship between the input and output data sets. Today, most popular CNNs stack more and more convolution layers (referred to as deep learning) to improve performance, but in many instances this approach tends to overfit the data, resulting in a loss of prediction accuracy. The Inception network was an essential milestone in the development of CNN classifiers. The Inception layer is constructed with parallel pipelines of convolution operators, which results in improved performance. While several studies have quantified the impact of RRAM degradation on prediction accuracy for pattern classification / image recognition in simple neural networks with one or two hidden layers, the impact of these hardware variations on a full-fledged, commercially used CNN is not well explored. In this study, we extract the GPU-trained weights of a CNN platform for visual recognition and replace the GPU weights with RRAM resistance data in floating-point format for the Inception network layers alone, to quantify the prediction accuracy of a "partially" hardware-based CNN.
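As a rough illustration of the weight-substitution idea described in the abstract, the sketch below perturbs the convolution weights of a single Inception-style branch with a multiplicative conductance spread and measures how much the branch output drifts. This is a minimal PyTorch sketch, not the authors' code: the branch dimensions, the `apply_rram_variability` helper, and the log-normal variability model are illustrative assumptions, whereas the paper itself substitutes measured RRAM resistance data (in floating-point format) into the GPU-trained Inception weights.

```python
# Minimal sketch (assumed, not the paper's implementation): inject
# RRAM-like variability into the weights of one Inception-style branch
# and compare its output against the ideal (GPU-trained) weights.
import torch
import torch.nn as nn


class InceptionBranch(nn.Module):
    """One parallel pipeline of an Inception-style block: 1x1 reduce, then 3x3 conv."""

    def __init__(self, in_ch: int, mid_ch: int, out_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        self.conv = nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(torch.relu(self.reduce(x))))


def apply_rram_variability(module: nn.Module, sigma: float = 0.05) -> None:
    """Replace each ideal weight w with w * exp(N(0, sigma^2)).

    A simple stand-in for device-to-device conductance spread; the study
    instead maps measured RRAM resistance values onto the weights.
    """
    with torch.no_grad():
        for p in module.parameters():
            p.mul_(torch.exp(torch.randn_like(p) * sigma))


# Usage: measure the output drift of one branch for a random input tensor.
branch = InceptionBranch(in_ch=192, mid_ch=96, out_ch=128)
x = torch.randn(1, 192, 28, 28)
ideal = branch(x)
apply_rram_variability(branch, sigma=0.05)
perturbed = branch(x)
rel_err = ((perturbed - ideal).norm() / ideal.norm()).item()
print(f"relative output drift from RRAM variability: {rel_err:.3f}")
```

In a full experiment, this drift would be propagated through the remaining Inception layers and scored against the classification labels to obtain the change in prediction accuracy, which is the quantity the paper reports.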
