International conference on intelligent computing; CICI 2009

A Constrained Approximation Algorithm by Encoding Second-Order Derivative Information into Feedforward Neural Networks

Abstract

In this paper, a constrained learning algorithm is proposed for function approximation. The algorithm incorporates a priori information about the approximated function into single-hidden-layer feedforward neural networks as constraints. The activation functions of the hidden neurons are specific polynomial functions based on Taylor series expansions, and the connection-weight constraints are obtained from second-order derivative information of the approximated function. Experimental results show that the new algorithm achieves better generalization performance than traditional learning algorithms.
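The idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm; it assumes a network whose hidden activations are the Taylor monomials x^k (so the output is a polynomial in x), and it encodes known second-derivative information of the target function as a soft constraint on the output weights. All function names, the target f(x) = sin(x), the degree, and the penalty weight are illustrative choices.

```python
import numpy as np

def fit_constrained_poly(x, f, f2, degree=7, lam=1.0):
    """Fit y(x) = sum_k w_k x^k to samples of f, with a penalty tying the
    model's second derivative y''(x) = sum_k w_k k(k-1) x^(k-2) to the
    known f''(x) at the same sample points (soft constraint, weight lam)."""
    K = np.arange(degree + 1)
    Phi = x[:, None] ** K                              # hidden outputs: x^k
    coef = K * (K - 1)                                 # zero for k = 0, 1
    Phi2 = coef * x[:, None] ** np.clip(K - 2, 0, None)  # d^2/dx^2 of x^k
    # Stack the data term and the derivative-constraint term and solve the
    # joint linear least-squares problem for the output weights w.
    A = np.vstack([Phi, np.sqrt(lam) * Phi2])
    b = np.concatenate([f(x), np.sqrt(lam) * f2(x)])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w, Phi

x = np.linspace(-1.0, 1.0, 50)
w, Phi = fit_constrained_poly(x, np.sin, lambda t: -np.sin(t))
err = np.max(np.abs(Phi @ w - np.sin(x)))
print(f"max approximation error: {err:.2e}")
```

Because the polynomial basis is linear in the weights, the constrained fit reduces to one augmented least-squares solve here; the paper's setting with trained connection weights would instead enforce such constraints during iterative learning.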
