
A multi-output two-stage locally regularized model construction method using the extreme learning machine



Abstract

This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
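The paper itself provides no code; the following is a minimal illustrative sketch of the two-stage idea the abstract describes, not the authors' implementation. It assumes Gaussian regressors whose nonlinear parameters (centres, widths) are assigned randomly in ELM fashion, a single global regularization parameter `lam` in place of the paper's locally regularized weights, and hypothetical helper names (`build_candidates`, `fit_subset`, `two_stage_select`). Stage 1 greedily selects terms by residual reduction; stage 2 revisits each selected term and replaces it when an unselected candidate fits better.

```python
import numpy as np

def build_candidates(X, n_candidates, rng):
    """ELM-style candidate pool: Gaussian regressors whose centres and
    widths are assigned randomly, with no tuning of nonlinear parameters."""
    centres = X[rng.integers(0, len(X), n_candidates)]
    widths = rng.uniform(0.5, 2.0, n_candidates)
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w_j^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2)), centres, widths

def fit_subset(Phi, Y, idx, lam):
    """Ridge-regularized least squares on the chosen columns
    (Y may have several output columns); returns the total residual."""
    P = Phi[:, idx]
    W = np.linalg.solve(P.T @ P + lam * np.eye(len(idx)), P.T @ Y)
    return ((Y - P @ W) ** 2).sum()

def two_stage_select(Phi, Y, n_terms, lam=1e-3):
    selected, rest = [], list(range(Phi.shape[1]))
    # Stage 1: greedy forward selection by residual reduction.
    for _ in range(n_terms):
        best = min(rest, key=lambda j: fit_subset(Phi, Y, selected + [j], lam))
        selected.append(best)
        rest.remove(best)
    # Stage 2: recheck each selected regressor; swap it out whenever an
    # unselected candidate yields a strictly lower regularized residual.
    improved = True
    while improved:
        improved = False
        for pos in range(len(selected)):
            others = selected[:pos] + selected[pos + 1:]
            current = fit_subset(Phi, Y, others + [selected[pos]], lam)
            best = min(rest, key=lambda j: fit_subset(Phi, Y, others + [j], lam))
            if fit_subset(Phi, Y, others + [best], lam) < current:
                rest.append(selected[pos])
                selected[pos] = best
                rest.remove(best)
                improved = True
    return selected
```

Because each stage-2 replacement strictly lowers the residual and there are finitely many subsets, the review loop terminates; the refitting per candidate is what the paper's "proper regression context" is designed to make cheap.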

Bibliographic Information

  • Source
    Neurocomputing | 2014, No. 27 | pp. 104-112 | 9 pages
  • Author Affiliations

    Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China; School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

    School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

    Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China;

    Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China;

    Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Extreme learning machine; Multi-output linear-in-the-parameters (LITP) model; Regularization; Two-stage stepwise selection;


