IEEE Transactions on Circuits and Systems I: Regular Papers

Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory



Abstract

We present a novel deep learning model for a neural network that reduces both computation and data storage overhead. To do so, the proposed model combines binary-weight neural network (BNN) training, a storage reuse technique, and an incremental training scheme. The storage requirement can be tuned to meet the desired classification accuracy, storing more parameters in on-chip memory and thereby reducing off-chip data storage accesses. Our experiments show a 4-6x reduction in weight storage footprint when training binary deep neural network models. On an FPGA platform, this reduces the number of off-chip accesses, enabling our model to train a neural network with 14x shorter latency than the conventional BNN training method.
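To make the BNN training idea in the abstract concrete, below is a minimal sketch of standard binary-weight training with a straight-through estimator (STE) on a toy regression task. This is an illustration of generic BNN training only, not the paper's recursive storage-reuse or incremental scheme; all names (`binarize`, `W`, dimensions, learning rate) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Map real-valued weights to {-1, +1}; zero maps to +1."""
    return np.where(w >= 0, 1.0, -1.0)

# Toy single-layer regression: targets come from a hidden binary weight matrix,
# so an exact binary solution exists.
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_in, n_out))   # latent full-precision weights
X = rng.normal(size=(32, n_in))
Y = X @ binarize(rng.normal(size=(n_in, n_out)))

def loss(W):
    return float(np.mean((X @ binarize(W) - Y) ** 2))

loss_init = loss(W)
lr = 0.01
for _ in range(300):
    Wb = binarize(W)               # forward pass uses binary weights only
    err = X @ Wb - Y
    grad = X.T @ err / len(X)      # gradient w.r.t. the binary weights ...
    W -= lr * grad                 # ... applied to the latent weights (STE)
    W = np.clip(W, -1.0, 1.0)      # keep latent weights bounded in [-1, 1]

loss_final = loss(W)
```

Each deployed weight needs 1 bit instead of 32, which is the source of the storage-footprint reduction the abstract cites; the paper's recursive/incremental scheme additionally reuses the storage occupied by already-trained parameters, which this sketch omits.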


