Frontiers in Neuroscience

Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks



Abstract

Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and the memory requirements for pipelining are drastically reduced. Sparsity in both the forward neural activations and the backpropagating error signals further reduces the number of addition operations, contributing to a highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, showing only a small degradation in test error compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
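To make the multiplication-free arithmetic concrete, below is a minimal NumPy sketch of training with binary states and truncated ternary errors. It is not the authors' pipelined FPGA implementation: the layer sizes, learning rate `LR`, truncation threshold `THR`, and the helper names `binarize`/`ternarize` are illustrative assumptions, and the sketch omits the pipeline registers and the handling of the binary activation's derivative. It only shows why, once hidden states are in {-1, +1} and errors in {-1, 0, +1}, every product in the forward pass, backward pass, and weight update reduces to a signed addition or a skip.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Binary state variable: the sign of the pre-activation, in {-1, +1}."""
    return np.where(x >= 0.0, 1.0, -1.0)

def ternarize(e, thr):
    """Truncated error: values in the deadzone [-thr, thr] are dropped to 0,
    the rest become +/-1. The zeros make the backward error path sparse."""
    return np.sign(e) * (np.abs(e) > thr)

# Illustrative sizes and hyperparameters (assumptions, not the paper's).
W1 = rng.normal(0.0, 0.1, (784, 256))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (256, 10))    # hidden -> output weights
LR, THR = 0.01, 0.05

def train_step(x_bin, y_onehot):
    """One forward pass with an interleaved weight update on a single
    binary input vector. Because hidden states are binary and errors
    are ternary, every product below is, in hardware, a signed addition
    (or a skip whenever the ternary factor is 0)."""
    global W1, W2
    h = binarize(x_bin @ W1)                  # binary hidden state
    out = h @ W2                              # output pre-activations
    e_out = ternarize(out - y_onehot, THR)    # ternary output error
    e_hid = ternarize(e_out @ W2.T, THR)      # ternary backpropagated error
    W2 -= LR * np.outer(h, e_out)             # binary x ternary update
    W1 -= LR * np.outer(x_bin, e_hid)
    return out

# Example usage on one random binarized "image" with label 3:
x = binarize(rng.normal(size=784))
y = np.eye(10)[3]
train_step(x, y)
```

In the paper's pipelined scheme, each layer's update is applied as soon as its error signal arrives, overlapped with the forward passes of subsequent samples; the sketch above collapses that pipeline into a single sequential step per sample.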
