
Combinatorial optimization of input features and learning parameters for decorrelated neural network ensemble-based soft measuring model


Abstract

The decorrelated neural network ensemble (DNNE) algorithm can be used to construct an effective soft measuring model through an analytical solution comprising submodels of multiple randomized neural networks. However, DNNE exhibits one major shortcoming: the scope of the random input weights and biases is set to a default range of [-1, 1], which cannot ensure the universal approximation capability of the resulting prediction model. The three other learning parameters of DNNE, namely the number of hidden nodes in the ensemble submodels, the ensemble size, and the regularizing factor, are also data dependent. Moreover, DNNE suffers from high computational complexity when processing high-dimensional, large-scale datasets. In addition, feature selection can improve the interpretability of the model. To address these problems, a combinatorial optimization method based on the adaptive genetic algorithm is used to simultaneously optimize the input features and learning parameters of the DNNE model. The evolution processes of these modeling parameters are demonstrated in detail. Simulation results on four datasets of different dimensions and sizes validate the effectiveness of the proposed approach. The results also indicate that the random parameter scope assignment significantly influences the generalization performance and the other learning parameters of the prediction model. (c) 2017 Elsevier B.V. All rights reserved.
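As a concrete illustration of the approach described in the abstract, the minimal Python sketch below encodes one chromosome as a binary feature-selection mask followed by four real-valued genes for the random parameter scope, the number of hidden nodes, the ensemble size, and the regularizing factor, and scores it by validation error. The gene ranges, the decode function, and the independently trained ridge-regression submodels are illustrative assumptions only; the paper's DNNE derives its output weights from a joint negative-correlation analytical solution, and its adaptive genetic operators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(chrom, n_features):
    """Split one chromosome into a feature mask and DNNE-style learning parameters."""
    mask = chrom[:n_features] > 0.5                   # binary feature-selection genes
    g = chrom[n_features:]
    lam = 0.1 + 9.9 * g[0]                            # random weight/bias scope [-lam, lam]
    L   = int(5 + 95 * g[1])                          # hidden nodes per submodel
    M   = int(2 + 18 * g[2])                          # ensemble size
    C   = 10.0 ** (-6.0 + 6.0 * g[3])                 # regularizing factor
    return mask, lam, L, M, C

def fitness(chrom, X_tr, y_tr, X_va, y_va):
    """Negative validation RMSE of a simplified randomized-NN ensemble."""
    mask, lam, L, M, C = decode(chrom, X_tr.shape[1])
    if not mask.any():                                # at least one feature must survive
        return -np.inf
    Xtr, Xva = X_tr[:, mask], X_va[:, mask]
    pred = np.zeros(len(y_va))
    for _ in range(M):
        W = rng.uniform(-lam, lam, size=(Xtr.shape[1], L))  # random input weights
        b = rng.uniform(-lam, lam, size=L)                  # random biases
        H = np.tanh(Xtr @ W + b)
        # Ridge (regularized least-squares) output weights per submodel;
        # a simplification of DNNE's joint negative-correlation solution.
        beta = np.linalg.solve(H.T @ H + C * np.eye(L), H.T @ y_tr)
        pred += np.tanh(Xva @ W + b) @ beta
    return -np.sqrt(np.mean((pred / M - y_va) ** 2))

# Example: score a random population of 50 chromosomes (genes in [0, 1]).
# pop = rng.random((50, X_tr.shape[1] + 4))
# scores = [fitness(c, X_tr, y_tr, X_va, y_va) for c in pop]
```

An adaptive genetic algorithm would then evolve such chromosomes with fitness-dependent crossover and mutation probabilities; the sketch fixes only the representation and the objective.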

Bibliographic record

  • Source
    Neurocomputing | 2018, No. 31 | pp. 1426-1440 | 15 pages
  • Author affiliations

    Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China;

    Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China;

    Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Jiangsu, Peoples R China;

    Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110004, Liaoning, Peoples R China;

    Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110004, Liaoning, Peoples R China;

    Natl Polytech Inst, CINVESTAV, Dept Control Automat, IPN, Mexico City 07360, DF, Mexico;

  • Indexed in: Science Citation Index (SCI, USA); Engineering Index (EI, USA)
  • Original format: PDF
  • Language: eng
  • Keywords

    Decorrelated neural network ensembles (DNNE); Adaptive genetic algorithm (AGA); Random parameter scope assignment; Combinatorial optimization;
