
Efficient Computation Reduction in Bayesian Neural Networks Through Feature Decomposition and Memorization



Abstract

The Bayesian method is capable of capturing real-world uncertainties/incompleteness and properly addressing the overfitting issue faced by deep neural networks. In recent years, Bayesian neural networks (BNNs) have drawn tremendous attention from artificial intelligence (AI) researchers and have proved successful in many applications. However, their high computational complexity makes BNNs difficult to deploy in computing systems with a limited power budget. In this article, an efficient BNN inference flow is proposed to reduce the computation cost and is then evaluated using both software and hardware implementations. A feature decomposition and memorization (DM) strategy is utilized to reformulate the BNN inference flow in a reduced form: as proved by theoretical analysis and software validation, about half of the computations can be eliminated compared with the traditional approach. Subsequently, to resolve hardware resource limitations, a memory-friendly computing framework is further deployed to reduce the memory overhead introduced by the DM strategy. Finally, we implement our approach in Verilog and synthesize it with a 45-nm FreePDK technology. Hardware simulation results on multilayer BNNs demonstrate that, compared with the traditional BNN inference method, it provides a 73% reduction in energy consumption and a 4x speedup at the expense of a 14% area overhead.
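The abstract does not spell out the decomposition itself, but a common reading is as follows: under a Gaussian weight posterior sampled via the reparameterization trick, w = mu + sigma*eps, the product x @ w splits into a deterministic term x @ mu and a random term x @ (sigma*eps); the deterministic term and the feature-scaled deviations x_i * sigma_ij can be computed once and memorized across Monte Carlo samples, leaving roughly half the multiplications per sample. The NumPy sketch below illustrates this idea under these assumptions; all names and dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, T = 256, 128, 10               # layer size and Monte Carlo samples (hypothetical)
mu = rng.standard_normal((d_in, d_out))     # Gaussian posterior means of the weights
sigma = 0.1 * rng.random((d_in, d_out))     # Gaussian posterior standard deviations
x = rng.standard_normal(d_in)               # input feature vector

# Traditional flow: per sample, form w = mu + sigma*eps and compute x @ w,
# i.e. roughly 2 * d_in * d_out multiplications for each of the T samples.
y_trad = np.stack([x @ (mu + sigma * rng.standard_normal((d_in, d_out)))
                   for _ in range(T)])

# DM-style flow: x @ w = x @ mu + x @ (sigma*eps). The deterministic term
# x @ mu and the feature-scaled deviations a[i, j] = x[i] * sigma[i, j] are
# computed ONCE and memorized; each sample then only accumulates a * eps,
# about d_in * d_out multiplications -- roughly half the traditional cost.
y_mean = x @ mu                             # memorized deterministic part
a = x[:, None] * sigma                      # memorized feature-scaled deviations
y_dm = np.stack([y_mean + (a * rng.standard_normal((d_in, d_out))).sum(axis=0)
                 for _ in range(T)])

# Both flows sample the same output distribution; their Monte Carlo means
# should both be close to the memorized deterministic part.
print(np.abs(y_trad.mean(axis=0) - y_mean).max())
print(np.abs(y_dm.mean(axis=0) - y_mean).max())
```

In this sketch the saving comes purely from moving the mu-dependent work out of the per-sample loop, which is where the memorization part of DM would pay off; the memorized buffers y_mean and a are also the source of the memory overhead that the paper's memory-friendly computing framework is said to reduce.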

Bibliographic Information

  • Source
    IEEE Transactions on Neural Networks and Learning Systems
  • Author Affiliations

    Beihang Univ, Fert Beijing Res Inst, BDBC, Sch Microelect, Beijing 100191, Peoples R China | Beihang Univ, Qingdao Res Inst, Beihang Goertek Joint Microelect Inst, Qingdao 266101, Peoples R China;

    Beihang Univ, Fert Beijing Res Inst, BDBC, Sch Comp Sci & Engn, Beijing 100191, Peoples R China;

    Beihang Univ, Fert Beijing Res Inst, BDBC, Sch Comp Sci & Engn, Beijing 100191, Peoples R China;

    Beihang Univ, Fert Beijing Res Inst, BDBC, Sch Microelect, Beijing 100191, Peoples R China | Beihang Univ, Qingdao Res Inst, Beihang Goertek Joint Microelect Inst, Qingdao 266101, Peoples R China;

    Delft Univ Technol, Fac Elect Engn Math & Comp Sci, NL-2628 CD Delft, Netherlands;

    Beihang Univ, Fert Beijing Res Inst, BDBC, Sch Microelect, Beijing 100191, Peoples R China | Beihang Univ, Qingdao Res Inst, Beihang Goertek Joint Microelect Inst, Qingdao 266101, Peoples R China;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    Bayesian neural network (BNN); computation reduction; feature decomposition; memory reduction;


