International Joint Conference on Neural Networks

Socrates-D 2.0: A Low Power High Throughput Architecture for Deep Network Training



Abstract

Specialized ultra-low-power deep learning architectures with on-chip training capability can be useful in a variety of applications that require adaptability. This paper presents such a processor design, Socrates-D 2.0, a multicore architecture for deep-neural-network training and inference. The architecture consists of a set of processing cores, each with internal memories to store synaptic weights. Additionally, we present a method to map traditional deep learning networks onto our multicore architecture and show that it has minimal impact on training accuracy. The system-level area and power benefits of the specialized architecture are compared with those of the earlier generation of Socrates-D. Our experimental evaluations show that the proposed architecture provides 1.25× area efficiency and 1.19× energy efficiency compared with the previous version of Socrates-D.
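The abstract does not detail how networks are mapped onto the cores. Purely as an illustration of the general idea, the sketch below partitions a layer's weight matrix row-wise so that each core holds the synaptic weights for a disjoint subset of output neurons; the function name and the row-wise scheme are assumptions, not the authors' actual method:

```python
import numpy as np

def partition_weights(weight_matrix, num_cores):
    """Split a layer's weight matrix row-wise so each core stores
    the weights for a disjoint set of output neurons.
    (Illustrative only; not the mapping described in the paper.)"""
    return np.array_split(weight_matrix, num_cores, axis=0)

# Example: a fully connected layer with 100 output neurons and
# 64 inputs, spread over 8 processing cores.
W = np.random.randn(100, 64)
shards = partition_weights(W, 8)

assert len(shards) == 8                              # one shard per core
assert sum(s.shape[0] for s in shards) == 100        # every neuron assigned
```

Each core can then compute the partial activations for its own neurons locally, which is one common rationale for keeping weights in per-core internal memory.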


