IEEE Transactions on Circuits and Systems II: Express Briefs

Hardware Implementation of an Improved Stochastic Computing Based Deep Neural Network Using Short Sequence Length



Abstract

By introducing stochastic computing (SC), the hardware cost of a Deep Neural Network (DNN) can be reduced significantly. However, a long SC sequence length is needed to maintain accuracy, which results in extended computation time. To shorten the sequence length, we propose two high-accuracy SC computation units that improve the precision of the SC-DNN: a rematching-based correlation-independent multiplier and an accumulator-based Rectified Linear Unit. Moreover, a length-adaptive method that uses variable sequence lengths for different images is adopted to decrease the average sequence length. Software simulation shows that the SC design incurs only 0.01% accuracy loss on the MNIST dataset with a sequence length of just 20, compared to a binary system. The ASIC layout results demonstrate that area efficiency is improved by 16X compared to the latest SC-DNN with similar accuracy loss.
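To illustrate the basic principle the abstract relies on, the sketch below shows generic unipolar stochastic computing: a value in [0, 1] is encoded as the probability of 1s in a bitstream, so multiplication reduces to a bitwise AND, and accuracy depends on the sequence length. This is a textbook SC baseline, not the paper's rematching-based correlation-independent multiplier; the function names are illustrative.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a unipolar stochastic bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b):
    """Unipolar SC multiplication: bitwise AND of two independent streams."""
    return [x & y for x, y in zip(a, b)]

def to_value(stream):
    """Decode: the fraction of 1s estimates the encoded value."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 1024  # longer streams reduce estimation error; short streams are cheaper but noisier
a = to_stream(0.5, n, rng)
b = to_stream(0.8, n, rng)
prod = to_value(sc_multiply(a, b))
print(prod)  # roughly 0.5 * 0.8 = 0.4, up to stochastic estimation error
```

The tension the paper addresses is visible here: shrinking `n` from 1024 toward 20 cuts latency and hardware, but raises the variance of the estimate unless the computation units are made more precise.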
