Neural Computing & Applications

Neural network training based on FPGA with floating point number format and its performance



Abstract

In this paper, the training of a two-layer feedforward artificial neural network (ANN) by back-propagation, and its implementation on an FPGA (field-programmable gate array) using floating-point number formats of different bit lengths, are examined on the XOR problem. In keeping with the inherently parallel data-processing nature of ANNs, particular care is taken to realize the ANN training operations in parallel on the FPGA. A Virtex2vp30 chip of the Xilinx FPGA family is used for training, and the network created on the FPGA is coded in VHDL. Compared with results in the available literature, the technique developed here consumes less area for ANN training of the same structure and bit length, and is shown to have better performance.
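The abstract describes back-propagation training of a two-layer feedforward network on the XOR problem. As an illustrative software sketch only (the paper's actual contribution is a parallel floating-point hardware implementation in VHDL, which is not reproduced here), the underlying algorithm can be written in plain Python; the hidden-layer size, learning rate, and epoch count below are assumptions for illustration, not values taken from the paper.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: inputs and target outputs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # hidden units (assumed; enough to break XOR's non-linear separability)
# Hidden-layer weights: [w_x0, w_x1, bias] per hidden unit
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
# Output-layer weights: one per hidden unit, plus a bias
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    """Forward pass: returns hidden activations and the network output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, o

def total_error():
    """Sum of squared errors over the whole training set."""
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

lr = 0.5  # learning rate (assumed)
err_before = total_error()
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        # Output delta; o * (1 - o) is the sigmoid derivative
        d_o = (t - o) * o * (1 - o)
        # Hidden deltas, back-propagated through the (pre-update) output weights
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
        # Update output-layer weights and bias
        for i in range(H):
            w_o[i] += lr * d_o * h[i]
        w_o[H] += lr * d_o
        # Update hidden-layer weights and biases
        for i in range(H):
            w_h[i][0] += lr * d_h[i] * x[0]
            w_h[i][1] += lr * d_h[i] * x[1]
            w_h[i][2] += lr * d_h[i]
err_after = total_error()
```

On the FPGA, the same arithmetic is realized in floating-point hardware units, with the per-neuron multiply-accumulate and weight-update operations executing in parallel rather than in this sequential loop.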
