
Simplified neural networks algorithms for function approximation and regression boosting on discrete input spaces



Abstract

The function approximation capabilities of feedforward neural networks have been widely investigated over the past few decades, and a substantial body of work has established the 'universal approximation property' of these networks. Most applications of neural networks to function approximation have concentrated on problems where the input variables are continuous. However, many real-world problems involve input variables that take only discrete values, or in which a significant proportion of the inputs are discrete. Most learning algorithms proposed so far do not distinguish between the characteristics of continuous and discrete input spaces and treat them in much the same way. As a result, the corresponding learning algorithms become unnecessarily complex and time-consuming, especially when the inputs consist mainly of discrete variables. More recently, it has been shown that by exploiting the special features of discrete input spaces, simpler and more robust algorithms can be developed. The main objective of this work is to address the function approximation capabilities of artificial neural networks, with particular emphasis on the development, implementation, testing and analysis of new learning algorithms for the simplified neural network approximation scheme for functions defined on discrete input spaces. By developing the corresponding learning algorithms and testing them on different benchmark data sets, it is shown that, compared with conventional multilayer neural networks for approximating functions on discrete input spaces, the proposed simplified neural network architecture and algorithms achieve similar or better approximation accuracy, particularly in high-dimensional, low-sample settings, while using a much simpler architecture with fewer parameters.
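The abstract does not specify the simplified scheme itself, but one special feature of discrete input spaces that such schemes can exploit is illustrated below: on a finite discrete domain, one-hot encoding each joint input state lets a single linear layer represent any target function exactly, with one weight per state. This is a hypothetical sketch for intuition only, not the thesis's actual architecture.

```python
from itertools import product

# Illustrative example (not the thesis's scheme): a function on a finite
# discrete input space can be represented exactly by a one-hot encoding
# followed by a single linear layer -- the weights are simply the table
# of function values, so no iterative training is needed.

domain = list(product([0, 1, 2], [0, 1]))  # 6 joint input states

def target(x, y):
    # Arbitrary function on the discrete domain, chosen for the demo.
    return x * x - 3 * y

def one_hot(state):
    # Indicator feature vector: 1.0 at the matching joint state, else 0.0.
    return [1.0 if state == s else 0.0 for s in domain]

# "Training" reduces to reading off one weight per joint state.
weights = [float(target(x, y)) for (x, y) in domain]

def net(state):
    # Linear layer over the one-hot features.
    return sum(w * h for w, h in zip(weights, one_hot(state)))

# The representation is exact on every point of the discrete domain.
assert all(net(s) == target(*s) for s in domain)
```

With continuous inputs no such finite tabulation exists, which is one reason algorithms tailored to discrete inputs can be markedly simpler; a practical scheme would of course share parameters across states rather than memorise them.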
To investigate the wider implications of simplified neural networks, their application has been extended to the regression boosting framework. By developing, implementing and testing these algorithms on empirical data, it is shown that the simplified neural-network-based algorithms also perform well in other neural-network-based ensembles.
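The regression boosting framework referred to above fits each base learner to the residuals of the current ensemble and adds it with a shrinkage factor. The following is a minimal sketch of that loop; decision stumps stand in for the thesis's simplified neural networks purely to keep the example short, and all names and data are illustrative.

```python
# Hedged sketch of least-squares regression boosting: each base learner
# is fit to the current residuals and added with a shrinkage factor (lr).
# Decision stumps replace the thesis's simplified neural networks here
# only for brevity; any regressor could serve as the base learner.

def fit_stump(xs, residuals):
    """Fit a 1-D decision stump (threshold + two leaf means) to residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - (lv if x <= t else rv)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def boost(xs, ys, rounds=50, lr=0.1):
    """Build an additive ensemble by repeatedly fitting the residuals."""
    learners, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        h = fit_stump(xs, residuals)
        learners.append(h)
        preds = [p + lr * h(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * h(x) for h in learners)

xs = [0, 1, 2, 3, 4, 5]
ys = [float(x * x) for x in xs]  # noiseless toy target
model = boost(xs, ys)
```

Because each round fits the residuals of the ensemble so far, the training error is non-increasing; swapping the stump for a small network recovers the neural-network-based ensemble setting the abstract describes.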
