ETRI Journal

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality: An Approach of Non-Parametric Estimation

Abstract

This paper presents function approximation based on non-parametric estimation. The estimation model is a three-layer network composed of input, hidden, and output layers. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, characterized by bounds and locality. With this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. Both the necessary number of kernel functions and the parameters associated with them must be estimated. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers, while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the sense of Linear Minimization of Mean Square Error (LMMSE) in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to those parameters (especially their shape). This approach of parameter adaptation provides near-optimal values of the kernel-function parameters in the sense of minimizing mean square error. As a result, the suggested non-parametric estimation provides an efficient way of function approximation with respect to both the number of kernel functions required and the learning speed.
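The hybrid training scheme the abstract describes, with output weights solved linearly in the LMMSE sense while kernel parameters follow the gradient of the network output, can be sketched as below. The Gaussian kernel form, the fixed kernel count, the learning rates, and the 1-D test function are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(x, centers, widths):
    # Bounded, local kernel responses: phi_j(x) = exp(-(x - c_j)^2 / (2 s_j^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                  / (2.0 * widths[None, :] ** 2))

# A many-to-one target function over a 1-D input domain (assumed example)
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Hidden layer: a fixed number of kernel units; the paper also estimates
# this number, which is not modeled here.
n_kernels = 8
centers = np.linspace(-3.0, 3.0, n_kernels)
widths = np.full(n_kernels, 0.8)
lr = 1e-3

for _ in range(200):
    Phi = gaussian_kernel(x, centers, widths)           # (N, K)
    # Linear rule: LMMSE output weights via least squares in kernel space
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = Phi @ w - y                                 # network output - target
    # Nonlinear rule: gradient of squared error w.r.t. kernel shape parameters
    d = x[:, None] - centers[None, :]
    grad_c = ((resid[:, None] * Phi) * (d / widths**2) * w).sum(axis=0)
    grad_s = ((resid[:, None] * Phi) * (d**2 / widths**3) * w).sum(axis=0)
    centers -= lr * grad_c
    widths -= lr * grad_s

mse = float(np.mean((gaussian_kernel(x, centers, widths) @ w - y) ** 2))
```

Solving the output weights in closed form each step keeps that part of the problem linear, so only the kernel centers and widths require iterative gradient adaptation.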
