IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Exploiting Inherent Error Resiliency of Deep Neural Networks to Achieve Extreme Energy Efficiency Through Mixed-Signal Neurons


Abstract

Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of neuromorphic computing depend heavily on the choice of the neuron architecture. Digital neurons (Dig-Ns) are conventionally known to be accurate and efficient at high speed, but they suffer from high leakage currents due to the large number of transistors in a large design. Analog/mixed-signal neurons (MS-Ns), on the other hand, are prone to noise, variability, and mismatch, but can lead to extremely low-power designs. In this paper, we analyze, compare, and contrast existing neuron architectures with a proposed MS-N in terms of performance, power, and noise, thereby demonstrating the applicability of the proposed MS-N for achieving extreme energy efficiency (femtojoule per multiply-and-accumulate or less). The proposed MS-N is implemented in 65-nm CMOS technology and exhibits >100× better energy efficiency across all frequencies than two traditional Dig-Ns synthesized in the same technology node. We also demonstrate that the inherent error resiliency of a fully connected, or even convolutional, neural network can tolerate the noise as well as the manufacturing nonidealities of the MS-N up to a certain degree. Notably, a system-level implementation on the CIFAR-10 data set exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is ≈0.1 μV², with ±3σ variation and mismatch introduced in the transistor parameters, for the proposed neuron operating at 8-bit precision.
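The abstract's central claim — that a DNN's inherent error resiliency absorbs analog noise and mismatch up to a point — can be illustrated with a minimal simulation. The sketch below is not the paper's implementation: it simply models an 8-bit-quantized multiply-and-accumulate (MAC) with additive Gaussian noise standing in for thermal noise and transistor mismatch, and measures how rarely the noisy output changes sign relative to the ideal computation. The noise level, vector sizes, and sign-based "decision" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits=8):
    """Uniformly quantize values in [-1, 1] to the given bit width."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(x, -1.0, 1.0) * levels) / levels

def noisy_mac(w, x, noise_sigma):
    """Ideal vs. noisy 8-bit MAC: ideal = quantized dot product,
    noisy = ideal plus additive Gaussian noise (thermal noise + mismatch proxy)."""
    wq, xq = quantize(w), quantize(x)
    ideal = wq @ xq
    noisy = ideal + rng.normal(0.0, noise_sigma, size=ideal.shape)
    return ideal, noisy

# 10,000 random "neurons", each with a 128-element input vector.
w = rng.uniform(-1.0, 1.0, size=(10000, 128))
x = rng.uniform(-1.0, 1.0, size=128)
ideal, noisy = noisy_mac(w, x, noise_sigma=0.05)

# Fraction of neurons whose output sign (a crude decision proxy) flips.
flip_rate = np.mean(np.sign(ideal) != np.sign(noisy))
print(f"sign-flip rate at sigma=0.05: {flip_rate:.4f}")
```

Because the accumulated dot product is typically much larger in magnitude than the injected noise, only outputs already near zero flip sign — the same intuition behind the paper's small (2.1% worst-case) accuracy degradation under ≈0.1 μV² integrated noise and ±3σ device variation.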
