Conference: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on

The use of on-line co-training to reduce the training set size in pattern recognition methods: Application to left ventricle segmentation in ultrasound


Abstract

The use of statistical pattern recognition models to segment the left ventricle of the heart in ultrasound images has gained substantial attention over the last few years. The main obstacle to the wider exploration of this methodology lies in the need for large annotated training sets, which are used for the estimation of the statistical model parameters. In this paper, we present a new on-line co-training methodology that reduces the need for large training sets for such parameter estimation. Our approach learns the initial parameters of two different models using a small manually annotated training set. Then, given each frame of a test sequence, the methodology not only produces the segmentation of the current frame, but also uses the results of both classifiers to retrain each other incrementally. This on-line aspect of our approach has the advantage of producing segmentation results and retraining the classifiers on the fly as frames of a test sequence are presented, but it introduces a harder learning setting compared to the usual off-line co-training, where the algorithm has access to the whole set of un-annotated training samples from the beginning. Moreover, we introduce the use of the following new types of classifiers in the co-training framework: deep belief network and multiple model probabilistic data association. We show that our method leads to a fully automatic left ventricle segmentation system that achieves state-of-the-art accuracy on a public database with training sets containing at least twenty annotated images.
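The on-line retraining loop described in the abstract can be illustrated with a short, hedged sketch. The Python below is not the paper's implementation (it does not model the deep belief network or the multiple model probabilistic data association classifiers); the interfaces predict_with_confidence and partial_fit, and the confidence threshold, are hypothetical placeholders used only to show how two classifiers can segment each incoming frame and incrementally retrain each other as frames of a test sequence arrive.

# Minimal sketch of a generic on-line co-training loop, assuming two
# classifiers already initialized on a small annotated training set.
# The method names and the threshold are illustrative, not the paper's API.

def online_cotrain(clf_a, clf_b, frames, conf_threshold=0.9):
    """Segment each frame and let the two classifiers retrain each other
    incrementally whenever one of them is confident about its output."""
    segmentations = []
    for frame in frames:
        seg_a, conf_a = clf_a.predict_with_confidence(frame)
        seg_b, conf_b = clf_b.predict_with_confidence(frame)

        # Report the more confident segmentation for the current frame.
        segmentations.append(seg_a if conf_a >= conf_b else seg_b)

        # On-line co-training step: a confident classifier provides a
        # pseudo-label that is used to incrementally update the other one.
        if conf_a >= conf_threshold:
            clf_b.partial_fit(frame, seg_a)
        if conf_b >= conf_threshold:
            clf_a.partial_fit(frame, seg_b)
    return segmentations

Unlike off-line co-training, the loop above never sees the full un-annotated sequence in advance: each frame is segmented and then used for retraining before the next frame is processed.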
