
Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level



Abstract

We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes while identifying and filtering noisy training data. The noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight learning method is introduced that learns per-class weights for the different classifiers to construct an ensemble. For this purpose, we apply a genetic algorithm to search for the weight vector on which the classifier ensemble is expected to achieve the best accuracy. The proposed approach is evaluated on a variety of real-life datasets and compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.
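The class-level weight-learning step described in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical Python example, not the authors' implementation: it assumes each classifier outputs a class-probability matrix on a validation set, represents the ensemble as a per-classifier, per-class weight matrix `W` of shape `(K, C)`, and uses a simple genetic algorithm (tournament selection, uniform crossover, Gaussian mutation, elitism) to maximize ensemble accuracy. The synthetic data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy validation data: per-classifier class-probability matrices.
# In the paper's setting these would come from m-Mediods, GMM, and SVM
# models trained on noise-filtered data; here they are synthetic.
n_samples, n_classes, n_classifiers = 200, 3, 3
y_true = rng.integers(0, n_classes, n_samples)
probs = []
for k in range(n_classifiers):
    p = rng.random((n_samples, n_classes))
    p[np.arange(n_samples), y_true] += 0.5 + 0.3 * k  # classifiers of varying quality
    probs.append(p / p.sum(axis=1, keepdims=True))

def ensemble_accuracy(w):
    """Fitness: accuracy of the weighted-sum combination; w has shape (K, C)."""
    combined = sum(w[k] * probs[k] for k in range(n_classifiers))
    return np.mean(combined.argmax(axis=1) == y_true)

def genetic_search(pop_size=30, generations=40, mut_rate=0.2):
    """Search for a per-class weight matrix maximizing validation accuracy."""
    pop = rng.random((pop_size, n_classifiers, n_classes))
    for _ in range(generations):
        fitness = np.array([ensemble_accuracy(w) for w in pop])
        # Tournament selection: each child picks the fitter of two random parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] >= fitness[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover with a shifted copy of the parent pool.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, weights clipped to [0, 1].
        mutate = rng.random(children.shape) < mut_rate
        children = np.clip(children + mutate * rng.normal(0, 0.1, children.shape), 0, 1)
        # Elitism: carry the best individual into the next generation.
        children[0] = pop[fitness.argmax()]
        pop = children
    fitness = np.array([ensemble_accuracy(w) for w in pop])
    return pop[fitness.argmax()], fitness.max()

best_w, best_acc = genetic_search()
```

Learning one weight per class (rather than one per classifier) lets the ensemble defer to whichever classifier happens to be reliable on each class, which is the property the abstract highlights for noisy and imbalanced data.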
