2018 55th ACM/ESDA/IEEE Design Automation Conference

DPS: Dynamic Precision Scaling for Stochastic Computing-based Deep Neural Networks*



Abstract

Stochastic computing (SC) is a promising technique with advantages such as low cost, low power, and error resilience. However, SC-based CNN (convolutional neural network) accelerators have so far been limited to relatively small CNNs, primarily due to the inherent precision disadvantage of SC. At the same time, previous SC architectures do not exploit dynamic precision capability, which can be crucial in providing both efficiency and flexibility in SC-CNN implementations. In this paper we present a DPS (dynamic precision scaling) SC-CNN that is able to exploit dynamic precision with very low overhead, along with the design methodology for it. Our experimental results demonstrate that our DPS SC-CNN is highly efficient and accurate up to ImageNet-targeting CNNs, and show efficiency improvements over conventional digital designs ranging in 50~100% in operations-per-area depending on the DNN and the application scenario, while losing less than 1% in recognition accuracy.
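The abstract's two key ideas, stochastic computing and precision that scales with bit-stream length, can be illustrated with a minimal software sketch. The Python below is a toy model only, not the paper's hardware architecture or its DPS design, and all names (such as sc_multiply) are hypothetical: it shows unipolar SC multiplication as a bitwise AND of two random bit-streams, and how the estimate tightens as the stream length grows.

```python
import random

def to_stream(x, length, rng):
    """Encode a value x in [0, 1] as a unipolar stochastic bit-stream (P(bit=1) = x)."""
    return [1 if rng.random() < x else 0 for _ in range(length)]

def from_stream(bits):
    """Decode a bit-stream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

def sc_multiply(x, y, length, seed=0):
    """Multiply two unipolar SC values; in hardware this is one AND gate per bit pair."""
    rng = random.Random(seed)
    xs = to_stream(x, length, rng)   # independent draws for each stream
    ys = to_stream(y, length, rng)
    return from_stream([a & b for a, b in zip(xs, ys)])

# Dynamic precision scaling, in spirit: the same bitwise datapath run with longer
# or shorter streams trades accuracy against latency and energy.
for length in (16, 64, 256, 1024):
    print(f"length={length:5d}  estimate={sc_multiply(0.8, 0.6, length):.4f}  exact=0.4800")
```

In real SC hardware the streams would come from stochastic number generators (e.g., LFSR-based) rather than software pseudo-randomness, and the DPS scheme of the paper presumably controls the effective precision at run time with low overhead; the sketch only conveys the accuracy-versus-stream-length trade-off that motivates it.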

