
Flexible basis function neural networks for efficient analog implementations.



Abstract

The Radial Basis Function Network is a well-known structure for implementing static mappings in learning systems. It uses a set of radial basis functions, each dominating a local region of the input space. One drawback is that the number of basis-function components can grow rapidly as the input dimension increases. This problem is alleviated if the basis function is not restricted to a single specific form such as the Gaussian. Such flexibility can be achieved with the Sum-of-Products Neural Network (SOPNN) structure. This study investigates two alternatives to the SOPNN aimed at efficient hardware realization; both seek to eliminate costly multiplications. The first structure, the Minimum-Sum Network (MSN), uses the minimum function to approximate multiplication. The second, the Sum-Exponential-Sum Network (SESN), computes a product as the sum of logarithmic values (i.e., with a logarithmic multiplier). A logarithmic multiplier is essentially an adder, so it can handle the multiplication of many input values at once. Learning rules have been derived, learning algorithms implemented, and both networks evaluated on control and pattern-recognition examples.

The hardware design of the two new structures has also been investigated. Hardware efficiency is improved by replacing expensive multipliers with less expensive functions. Overall hardware designs for the MSN and SESN are presented; with today's integrated-circuit technology, either network could be fabricated on a single chip for real-time applications. In both structures, learned information is stored in analog floating-gate memory, which provides programmability and long-term storage. Stored weights are updated in parallel, so only a single write cycle is needed to update all relevant weights for a given training sample. Power consumption is expected to be low, and a large number of processing units (submodules) can be packed together to handle complex function learning.
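To make the two multiplication-free schemes concrete, the following is a minimal numerical sketch, not taken from the dissertation: all function and variable names are illustrative. It compares an exact product of basis-function component outputs with the MSN-style minimum approximation and the SESN-style logarithmic multiplier, assuming the component outputs lie in (0, 1] as Gaussian-like activations do.

import numpy as np

def product_exact(x):
    # Reference: direct product of the basis-function components.
    return np.prod(x)

def product_min(x):
    # MSN-style approximation (assumed form): for values in (0, 1],
    # the minimum upper-bounds the product and avoids multiplication.
    return np.min(x)

def product_log_sum(x):
    # SESN-style logarithmic multiplier: a product becomes a sum in
    # the log domain, so one adder combines many inputs at once.
    return np.exp(np.sum(np.log(x)))

# Hypothetical component outputs of a multidimensional basis function.
components = np.array([0.9, 0.8, 0.95])
print(product_exact(components))    # 0.684  (true product)
print(product_min(components))      # 0.8    (cheap upper bound)
print(product_log_sum(components))  # 0.684  (exact up to log/exp precision)

In the analog setting described by the abstract, the log and exp stages would be realized by device characteristics rather than computed digitally; the sketch only illustrates the arithmetic identity each structure exploits.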
