
A logarithmic neural network architecture for unbounded non-linear function approximation



Abstract

Multilayer feedforward neural networks with sigmoidal activation functions have been termed "universal function approximators". Although these networks can approximate any continuous function to a desired degree of accuracy, the approximation may require an inordinate number of hidden nodes and is accurate only over a finite interval. These shortcomings arise because the standard multilayer perceptron (MLP) architecture is not well suited to unbounded non-linear function approximation. A new architecture incorporating a logarithmic hidden layer proves to be superior to the standard MLP for unbounded non-linear function approximation. This architecture uses a percentage error objective function and a gradient descent training algorithm. Non-linear function approximation examples are used to show the increased accuracy of the new architecture over both the standard MLP and the logarithmically transformed MLP.
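
The abstract does not give the network's equations, so the NumPy sketch below is only a rough illustration of the idea: it assumes a logarithmic hidden layer of the form h_j = exp(Σ_i w_ji · ln x_i) (each hidden unit is a product of input powers), a mean squared percentage-error objective, and plain gradient descent. The layer sizes, learning rate, and toy target function are all hypothetical choices made to keep the example self-contained, not values taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's exact formulation): a "logarithmic" hidden
# layer whose units are products of input powers, h_j = exp(sum_i W[j,i]*ln x_i),
# followed by a linear output layer, trained by plain gradient descent on a
# squared percentage-error objective. All names, hyperparameters, and the toy
# target below are illustrative assumptions. Inputs must be strictly positive.

rng = np.random.default_rng(0)

def forward(X, W, v):
    """X: (n, d) positive inputs; W: (h, d) exponents; v: (h,) output weights."""
    H = np.exp(np.log(X) @ W.T)   # hidden units: products of input powers
    return H @ v, H

def pct_error_loss(y_pred, y_true):
    """Mean squared percentage error, matching the abstract's stated objective."""
    return np.mean(((y_pred - y_true) / y_true) ** 2)

# Toy unbounded target: y = 3 * x^2, trained only on x in [0.5, 5].
X = rng.uniform(0.5, 5.0, size=(200, 1))
y = 3.0 * X[:, 0] ** 2

h, d = 4, X.shape[1]
W = rng.normal(0.0, 0.5, size=(h, d))
v = rng.normal(0.0, 0.5, size=h)
lr, epochs = 0.05, 20000          # illustrative, untuned

for epoch in range(epochs):
    y_pred, H = forward(X, W, v)
    n = len(y)
    dL_dyhat = 2.0 * (y_pred - y) / (y ** 2) / n   # d(loss)/d(prediction)
    grad_v = H.T @ dL_dyhat                        # output-weight gradient
    dL_dH = np.outer(dL_dyhat, v)                  # back-prop into hidden layer
    grad_W = (dL_dH * H).T @ np.log(X)             # chain rule through exp and log
    v -= lr * grad_v
    W -= lr * grad_W
    if epoch % 5000 == 0:
        print(f"epoch {epoch:5d}  loss {pct_error_loss(y_pred, y):.4f}")

# Probe behaviour outside the training interval.
X_out = np.linspace(6.0, 20.0, 5).reshape(-1, 1)
y_out, _ = forward(X_out, W, v)
print("x:", X_out.ravel())
print("predicted:", y_out)
print("true:", 3.0 * X_out.ravel() ** 2)
```

The design intuition this sketch tries to capture is the one stated in the abstract: because each hidden unit is a power-law term rather than a saturating sigmoid, the model's output keeps growing outside the training interval, which is why such a layer is a natural fit for unbounded targets, and the percentage-error objective keeps the fit meaningful across widely differing output magnitudes.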
