Conference: IEEE International Conference on Control Automation

Joint sparse learning for classification ensemble



Abstract

Ensemble methods use multiple classifiers to achieve better decisions than any of the constituent classifiers could achieve alone. However, both theoretical and experimental evidence has shown that very large ensembles are not necessarily superior, and small ensembles can often achieve better results. In this paper, we show how to combine a set of weak classifiers into a robust ensemble by using a joint sparse representation method, which assigns a sparse coefficient vector to the decision of each classifier. The sparse vector contains many zero entries, so the final ensemble employs only the small number of classifiers corresponding to non-zero entries. Training data are partitioned into several sub-groups to generate sub-underdetermined systems. The joint sparse method then lets these sub-groups share their information about individual classifiers, yielding an improved overall classification. Partitioning the training dataset into sub-groups makes the proposed joint sparse ensemble method parallelizable and therefore suitable for large-scale problems. In contrast, previous work on sparse approaches to ensemble learning was limited to datasets smaller than the number of classifiers. Two different strategies are described for generating the sub-underdetermined systems, and experiments show these to be effective when tested with two different data manipulation methods. Experiments compare the joint sparse ensemble learning method against five other state-of-the-art methods from the literature, each designed to train small and efficient ensembles. Results suggest that joint sparse ensemble learning outperforms the other algorithms on most datasets.
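The selection mechanism the abstract describes can be illustrated with a small sketch; this is a hedged toy reconstruction under assumptions, not the authors' implementation. Random decision stumps stand in for the weak classifiers, the training set is split into G sub-groups (the "sub-underdetermined systems"), and a row-sparse (l2,1-regularized) least-squares problem, solved here by proximal gradient descent, selects one shared subset of classifiers across all sub-groups. All data, parameter values, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem: label is the sign of the first feature (hypothetical data).
n, d, M, G = 300, 5, 40, 3          # samples, features, weak classifiers, sub-groups
X = rng.uniform(-1, 1, (n, d))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# A pool of M random decision stumps as weak classifiers.
feat = rng.integers(0, d, M)
thr = rng.uniform(-1, 1, M)
H = np.where(X[:, feat] > thr, 1.0, -1.0)   # n x M matrix of {-1,+1} decisions

# Partition samples into G sub-groups (the sub-underdetermined systems).
idx = np.array_split(rng.permutation(n), G)

# Joint (row-)sparse learning via proximal gradient on
#   sum_g ||H_g w_g - y_g||^2  +  lam * sum_j ||W[j, :]||_2,
# so a classifier is kept or dropped jointly across all sub-groups.
lam = 20.0
W = np.zeros((M, G))
L = max(np.linalg.norm(H[ix], 2) ** 2 for ix in idx)   # Lipschitz-type constant
t = 1.0 / (2 * L)                                      # step size
for _ in range(500):
    grad = np.column_stack([2 * H[ix].T @ (H[ix] @ W[:, g] - y[ix])
                            for g, ix in enumerate(idx)])
    V = W - t * grad
    row_norms = np.linalg.norm(V, axis=1, keepdims=True)
    shrink = np.maximum(0.0, 1.0 - t * lam / np.maximum(row_norms, 1e-12))
    W = V * shrink                   # row-wise group soft threshold

# Non-zero rows of W are the classifiers the ensemble actually uses.
support = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)
w = W.mean(axis=1)                   # combine the sub-groups' coefficients
acc = np.mean(np.sign(H @ w) == y)
print(f"selected {support.size}/{M} classifiers, training accuracy {acc:.2f}")
```

Because each sub-group's gradient depends only on its own rows of the data, the per-group terms inside the loop could be computed in parallel, which mirrors the parallelizability claim in the abstract.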


