Pattern Recognition Letters

Linear classifier combination and selection using group sparse regularization and hinge loss


Abstract

The main principle of stacked generalization is to use a second-level generalizer to combine the outputs of the base classifiers in an ensemble. In this paper, after presenting a short survey of the literature on stacked generalization, we propose to use regularized empirical risk minimization (RERM) as a framework for learning the weights of the combiner, which generalizes earlier proposals and enables improved learning methods. Our main contribution is the use of group sparsity for regularization to facilitate classifier selection. In addition, we propose and analyze the use of the hinge loss instead of the conventional least squares loss. We performed experiments on three ensemble setups with differing levels of diversity, on 13 real-world datasets from various application domains. The results show the advantage of group sparse regularization over the conventional l_1 norm regularization: we are able to reduce the number of selected classifiers in the diverse ensemble without sacrificing accuracy, and with the non-diverse ensembles we even gain accuracy on average. In addition, we show that the hinge loss outperforms the least squares loss used in previous studies of stacked generalization.
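To make the RERM framing concrete, the following is a minimal sketch of how a combiner of this kind could be trained: the weights attached to each base classifier form one group, the empirical risk uses the hinge loss, and a group (l_2,1) penalty drives whole groups to zero so that unhelpful classifiers are deselected. The function names (fit_combiner, group_prox), the binary-label simplification, and the proximal subgradient solver are illustrative assumptions for this sketch, not the optimization procedure used in the paper.

    import numpy as np

    def group_prox(w, groups, t):
        """Block soft-thresholding: shrinks each group's weight block and
        zeroes it entirely when its norm is below t (classifier selection)."""
        out = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            out[g] = 0.0 if norm <= t else (1.0 - t / norm) * w[g]
        return out

    def fit_combiner(F, y, groups, lam=0.1, lr=0.01, epochs=500):
        """Proximal subgradient descent on
            (1/N) * sum_i max(0, 1 - y_i * (F_i @ w)) + lam * sum_g ||w_g||_2
        F      : (N, d) stacked base-classifier outputs (one column block per classifier)
        y      : (N,) labels in {-1, +1}
        groups : list of index arrays, one per base classifier."""
        n, d = F.shape
        w = np.zeros(d)
        for _ in range(epochs):
            margins = y * (F @ w)
            active = margins < 1.0                         # samples with a nonzero hinge subgradient
            grad = -(F[active] * y[active, None]).sum(axis=0) / n
            w = group_prox(w - lr * grad, groups, lr * lam)
        return w

    # Hypothetical toy usage: three base classifiers, each contributing one score column.
    rng = np.random.default_rng(0)
    F = rng.standard_normal((200, 3))
    y = np.sign(F[:, 0] + 0.1 * rng.standard_normal(200))
    groups = [np.array([0]), np.array([1]), np.array([2])]
    w = fit_combiner(F, y, groups, lam=0.05)
    print("combiner weights:", w)   # blocks of uninformative classifiers shrink toward exactly zero

Under this formulation, replacing the group penalty with a plain l_1 norm would zero individual weights but not whole classifiers, which is the distinction the abstract's comparison rests on.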
