
7.6 A 65nm 236.5nJ/Classification Neuromorphic Processor with 7.5% Energy Overhead On-Chip Learning Using Direct Spike-Only Feedback

Abstract

Advances in neural network and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance convolutional neural network (CNN) accelerators to energy-efficient deep-neural network (DNN) edge computing systems [1]. While most studies have focused on designing inference engines, recent works have shown that on-chip training could serve practical purposes such as compensating for process variations of in-memory computing [2] or adapting to changing environments in real time [3]. However, these successes were limited to relatively simple tasks mainly due to the large energy overhead of the training process. These problems arise primarily from the high-precision arithmetic and memory required for error propagation and weight updates, in contrast to error-tolerant inference operation; the capacity requirements of a learning system are significantly higher than those of an inference system [4].
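The "direct spike-only feedback" of the title refers to delivering the output error to earlier layers without the high-precision backward pass described above. Below is a minimal sketch of such an update rule, assuming a direct-feedback-alignment-style scheme with a fixed random feedback matrix and binary spike activations; the layer sizes, variable names, and learning rate are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' implementation) of a spike-only
# direct-feedback update. All sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 784, 256, 10            # assumed layer sizes
W1 = rng.normal(0, 0.1, (n_hid, n_in))       # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))      # hidden -> output weights
B  = rng.choice([-1.0, 1.0], (n_hid, n_out)) # fixed random feedback matrix
                                             # (used instead of W2.T)

def spikes(potential, threshold=1.0):
    """Binary spike output: 1 where the potential crosses the threshold."""
    return (potential >= threshold).astype(np.float64)

def train_step(x_spikes, target_spikes, lr=1e-3):
    # Forward pass with binary (spike) activations only.
    h = spikes(W1 @ x_spikes)
    y = spikes(W2 @ h)

    # The output error is itself spike-valued, in {-1, 0, +1}:
    # +1 for a missing spike, -1 for a spurious spike.
    err = target_spikes - y

    # Direct spike-only feedback: project the output error to the hidden
    # layer through the fixed random matrix B rather than back-propagating
    # high-precision gradients through W2.
    hidden_err = B @ err

    # Local weight updates are outer products of spike vectors, so they
    # need no high-precision error arithmetic or activation storage.
    dW2 = lr * np.outer(err, h)
    dW1 = lr * np.outer(hidden_err, x_spikes)
    return dW1, dW2
```

In this sketch every quantity entering an update is binary (or a fixed random constant), which illustrates how a spike-only feedback path can avoid the high-precision error propagation and memory that the abstract identifies as the main source of on-chip training overhead.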
