International Conference on Mechatronics and Information Technology; December 5-6, 2007; Gifu, Japan

Implementation of neural network hardware based on a floating point operation in an FPGA

Abstract

This paper presents a hardware design and implementation of the radial basis function (RBF) neural network (NN) in a hardware description language. Owing to its nonlinear characteristics, the RBF network is very difficult to implement on a system restricted to integer arithmetic. Realizing nonlinear functions such as sigmoid or exponential functions requires floating-point operations. The exponential function is therefore designed on the basis of the 32-bit single-precision floating-point format. In addition, the back-propagation algorithm for updating the network weights is also implemented in hardware. Most operations are performed in a floating-point arithmetic unit and executed sequentially according to an instruction sequence stored in ROM. The NN is implemented and tested on an Altera Cyclone II FPGA (EP2C70F672C8) for nonlinear classification problems.
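For intuition, the sketch below is a minimal software reference model in C of the computation the abstract describes: an RBF forward pass through an exponential (Gaussian) basis function followed by a back-propagation weight update, with all arithmetic kept in 32-bit single precision via float and expf(). It is not the authors' HDL design; the Gaussian basis choice, network sizes, learning rate, and all names are illustrative assumptions.

```c
/*
 * Minimal software reference model of the RBF network described in the
 * abstract.  NOT the authors' HDL design: the Gaussian basis function,
 * the network sizes, the learning rate, and all names are illustrative
 * assumptions.  All arithmetic uses float and expf() to mirror 32-bit
 * single-precision operation.
 */
#include <math.h>
#include <stdio.h>

#define N_IN  2   /* input dimension (assumed)       */
#define N_HID 4   /* number of RBF centers (assumed) */
#define N_OUT 1   /* output dimension (assumed)      */

/* Gaussian radial basis: phi(x) = exp(-||x - c||^2 / (2 * sigma^2)).
 * The exponential is the nonlinear function the paper implements in
 * single-precision floating-point hardware. */
static float rbf(const float x[N_IN], const float c[N_IN], float sigma)
{
    float d2 = 0.0f;
    for (int i = 0; i < N_IN; i++) {
        float d = x[i] - c[i];
        d2 += d * d;
    }
    return expf(-d2 / (2.0f * sigma * sigma));
}

/* Forward pass: hidden activations h, then a weighted sum at the output. */
static void forward(const float x[N_IN], float centers[N_HID][N_IN],
                    const float sigma[N_HID], float w[N_OUT][N_HID],
                    float h[N_HID], float y[N_OUT])
{
    for (int j = 0; j < N_HID; j++)
        h[j] = rbf(x, centers[j], sigma[j]);
    for (int k = 0; k < N_OUT; k++) {
        y[k] = 0.0f;
        for (int j = 0; j < N_HID; j++)
            y[k] += w[k][j] * h[j];
    }
}

/* Back-propagation step for the linear output layer under a squared-error
 * loss: w[k][j] += eta * (t[k] - y[k]) * h[j]. */
static void update_weights(float w[N_OUT][N_HID], const float h[N_HID],
                           const float y[N_OUT], const float t[N_OUT],
                           float eta)
{
    for (int k = 0; k < N_OUT; k++)
        for (int j = 0; j < N_HID; j++)
            w[k][j] += eta * (t[k] - y[k]) * h[j];
}

int main(void)
{
    float centers[N_HID][N_IN] = {{0,0},{0,1},{1,0},{1,1}};
    float sigma[N_HID] = {0.5f, 0.5f, 0.5f, 0.5f};
    float w[N_OUT][N_HID] = {{0.1f, -0.2f, 0.3f, 0.05f}};
    float x[N_IN] = {1.0f, 0.0f};
    float t[N_OUT] = {1.0f};   /* XOR-like target, purely illustrative */
    float h[N_HID], y[N_OUT];

    forward(x, centers, sigma, w, h, y);
    update_weights(w, h, y, t, 0.1f);
    printf("output before update: %f\n", (double)y[0]);
    return 0;
}
```

Compile with, for example, "cc rbf_model.c -lm". In the paper's hardware, the same forward-and-update sequence is instead carried out step by step by a floating-point arithmetic unit driven by instructions stored in ROM.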
