
A novel automatic two-stage locally regularized classifier construction method using the extreme learning machine



Abstract

This paper investigates the design of a linear-in-the-parameters (LITP) regression classifier for two-class problems. Most existing algorithms learn a classifier (model) from the available training data based on some stopping criterion, such as Akaike's final prediction error (FPE). The drawback is that the resulting classifier is not obtained directly on the basis of its generalization capability. The main objective of this paper is to improve the sparsity and generalization capability of a classifier while reducing the computational expense of producing it. This is achieved by proposing an automatic two-stage locally regularized classifier construction (TSLRCC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of a Gaussian function or the power of a polynomial term, are first determined by the ELM. In the first stage, an initial classifier is then generated by directly evaluating these candidate models according to the leave-one-out (LOO) misclassification rate. In the second stage, the significance of each selected regressor term is checked and insignificant ones are replaced. To reduce the computational complexity, a proper regression context is defined which allows fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
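The abstract's two key computational ingredients are (i) ELM-style regressors whose nonlinear parameters (e.g. Gaussian widths) are assigned at random rather than optimised, and (ii) model assessment by the leave-one-out misclassification rate, which for a linear-in-the-parameters model has a closed form. The Python sketch below illustrates only these two ingredients under simplifying assumptions (a single ridge penalty standing in for local regularization, Gaussian regressors centred on training points); it is not the paper's TSLRCC procedure, and all function names and parameter choices are illustrative.

import numpy as np

def elm_gaussian_features(X, n_hidden, rng):
    # ELM-style random regressors: Gaussian centres drawn from the data,
    # widths drawn at random, i.e. the nonlinear parameters are fixed
    # randomly rather than optimised (hypothetical helper).
    centres = X[rng.choice(len(X), size=n_hidden, replace=False)]
    widths = rng.uniform(0.5, 2.0, size=n_hidden)
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def loo_misclassification(Phi, y, lam=1e-3):
    # Closed-form leave-one-out error for a ridge-regularised
    # linear-in-the-parameters model: e_loo_i = e_i / (1 - h_ii),
    # where h_ii is the i-th diagonal entry of the hat matrix.
    n, m = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(m)
    theta = np.linalg.solve(A, Phi.T @ y)
    H = Phi @ np.linalg.solve(A, Phi.T)      # hat (smoothing) matrix
    e_loo = (y - Phi @ theta) / (1.0 - np.diag(H))
    y_loo = y - e_loo                        # LOO predictions
    return np.mean(np.sign(y_loo) != y), theta

# Toy usage: two Gaussian blobs labelled -1 / +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
Phi = elm_gaussian_features(X, n_hidden=20, rng=rng)
loo_rate, theta = loo_misclassification(Phi, y)
print(f"LOO misclassification rate: {loo_rate:.3f}")

In a forward-selection setting like the paper's first stage, this closed-form LOO rate is what makes it cheap to score many candidate regressors without refitting the model once per left-out sample.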

Bibliographic Details

  • Source
    Neurocomputing, 2013, Issue 15, pp. 10-22 (13 pages)
  • Author Affiliations

    Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200072, China; School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

    School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

    School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

    School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5AH, UK;

  • Indexed in  Science Citation Index (SCI); Engineering Index (EI)
  • Format  PDF
  • Language  English
  • Keywords

    classification; extreme learning machine; leave-one-out (LOO) misclassification rate; linear-in-the-parameters model; regularization; two-stage stepwise selection;

