IEEE Transactions on Neural Networks

Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization



Abstract

This paper presents a hardware implementation of multilayer feedforward neural networks (NNs) using reconfigurable field-programmable gate arrays (FPGAs). Despite improvements in FPGA densities, the numerous multipliers in an NN limit the size of the network that can be implemented on a single FPGA, making NN applications commercially unviable. The proposed implementation aims to reduce resource requirements, without much compromise on speed, so that a larger NN can be realized on a single chip at a lower cost. The sequential processing of the layers in an NN is exploited in this paper to implement large NNs using a method of layer multiplexing. Instead of realizing the complete network, only the single largest layer is implemented. This one layer behaves as each of the different layers in turn, with the help of a control block. The control block ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function for the layer currently being computed. Multilayer networks have been implemented using the Xilinx FPGA "XCV400hq240." The concept is shown to be very effective in reducing resource requirements, at the cost of a moderate overhead in speed. This implementation is proposed to make NN applications viable in terms of cost and speed for online applications. An NN-based flux estimator is implemented in FPGA and the results obtained are presented.
机译:本文介绍了使用可重配置的现场可编程门阵列(FPGA)的多层前馈神经网络(NN)的硬件实现。尽管FPGA密度有所提高,但是NN中​​的众多乘法器限制了可以使用单个FPGA实施的网络的大小,因此使NN应用在商业上不可行。所提出的实现旨在减少资源需求,而又不对速度造成很大的影响,从而可以在单个芯片上以较低的成本实现更大的NN。本文利用神经网络中各层的顺序处理,通过层复用的方法来实现大型神经网络。不是实现完整的网络,而是仅实现最大的单个层。在控制块的帮助下,相同的层表现为不同的层。控制块通过分配当前正在计算的层的适当输入,权重,偏差和激励函数来确保适当的功能。多层网络已使用Xilinx FPGA“ XCV400hq240”实现。事实证明,所使用的概念在降低资源需求方面非常有效,但代价是需要适度的速度开销。提出该实现方式是为了使NN应用程序在在线应用程序的成本和速度方面可行。在FPGA中实现了基于神经网络的通量估计器,并给出了获得的结果
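The layer-multiplexing idea in the abstract can be illustrated with a short software sketch. The code below is a hypothetical simulation, not the authors' HDL design: a single loop body stands in for the one physical layer (which in hardware would be sized for the widest layer), and the loop index plays the role of the control block, selecting the weights, biases, and excitation function of the layer currently being computed. All network sizes and values are made up for illustration.

```python
import math

def layer_multiplexed_forward(x, weights, biases, activations):
    """Evaluate a multilayer feedforward net by reusing one 'layer' block.

    In the paper's scheme only a single physical layer exists in the FPGA;
    here the loop body models that layer, and each iteration models the
    control block loading the next layer's parameters into it.
    """
    a = x
    for W, b, act in zip(weights, biases, activations):
        # One pass of the shared layer: multiply-accumulate, add bias,
        # then apply the excitation function selected for this layer.
        a = [act(sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i)
             for row, b_i in zip(W, b)]
    return a

sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
identity = lambda v: v

# Hypothetical 2-3-1 network: two inputs, three hidden neurons, one output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [[0.7, -0.5, 0.2]]
b2 = [0.05]

y = layer_multiplexed_forward([1.0, 0.5],
                              [W1, W2], [b1, b2],
                              [sigmoid, identity])
```

In hardware, the hidden and output layers would share the same multipliers and adders; the speed overhead the abstract mentions comes from processing the layers one after another through that single block instead of pipelining separate layer circuits.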
