
Analysis of Co-training Algorithm with Very Small Training Sets



Abstract

Co-training is a well-known semi-supervised learning algorithm in which two classifiers are trained on two different views (feature sets): the initially small training set is iteratively augmented with unlabelled samples that one of the two classifiers has labelled with high confidence. In this paper we address an issue that has so far been overlooked in the literature, namely how co-training performance is affected by the size of the initial training set as it decreases towards the minimum below which a given learning algorithm can no longer be applied. We address this issue empirically, testing the algorithm on 24 real datasets artificially split into two views, using two different base classifiers. Our results show that a very small training set, even one made up of a single labelled sample per class, does not adversely affect co-training performance.
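The co-training loop described in the abstract can be sketched as follows. This is a minimal illustration only: the `CentroidClassifier` base learner, the synthetic two-view Gaussian data, and the per-round selection size are assumptions made for the sketch, not the paper's actual experimental setup (which used 24 real datasets and two other base classifiers).

```python
import numpy as np

class CentroidClassifier:
    """Toy base learner: predicts the nearest class centroid, with a soft confidence."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # distance to each centroid -> soft score (closer centroid => higher score)
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        s = np.exp(-d)
        return s / s.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]

def co_train(X1, X2, y_seed, labelled_idx, rounds=10, per_round=2):
    """Co-training over two views X1/X2, starting from a tiny labelled seed.

    Each round, each classifier pseudo-labels its `per_round` most confident
    unlabelled samples and moves them into the shared labelled pool.
    """
    labels = {i: y_seed[i] for i in labelled_idx}          # only seed labels are known
    unlabelled = set(range(len(X1))) - set(labelled_idx)
    c1, c2 = CentroidClassifier(), CentroidClassifier()
    for _ in range(rounds):
        idx = sorted(labels)
        yL = np.array([labels[i] for i in idx])
        c1.fit(X1[idx], yL)
        c2.fit(X2[idx], yL)
        for clf, X in ((c1, X1), (c2, X2)):
            u = sorted(unlabelled)
            if not u:
                break
            p = clf.predict_proba(X[u])
            top = np.argsort(p.max(axis=1))[-per_round:]   # most confident samples
            for t in top:
                i = u[t]
                labels[i] = clf.classes_[np.argmax(p[t])]  # pseudo-label
                unlabelled.discard(i)
    # final refit on the grown labelled pool
    idx = sorted(labels)
    yL = np.array([labels[i] for i in idx])
    c1.fit(X1[idx], yL)
    c2.fit(X2[idx], yL)
    return c1, c2

# Two well-separated Gaussian views with only ONE labelled sample per class,
# mirroring the extreme small-training-set setting studied in the paper.
rng = np.random.default_rng(0)
n = 40
y = np.array([0] * n + [1] * n)
X1 = np.vstack([rng.normal(0.0, 0.5, (n, 2)), rng.normal(3.0, 0.5, (n, 2))])
X2 = np.vstack([rng.normal(0.0, 0.5, (n, 2)), rng.normal(3.0, 0.5, (n, 2))])
c1, c2 = co_train(X1, X2, y, labelled_idx=[0, n])
acc = (c1.predict(X1) == y).mean()
```

On this easy synthetic problem the seed of one labelled sample per class is enough for the pseudo-labelling loop to recover accurate classifiers, which is the behaviour the paper reports on its real datasets.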
