Knowledge-Based Systems

Semi-supervised Dual-Branch Network for image classification

Abstract

In this work, we reveal an essential problem rarely discussed in the current semi-supervised learning literature: the mismatch between the learned feature distributions of labeled and unlabeled samples. It is common knowledge that learning from limited labeled data easily leads to overfitting. However, the difference between the inferred labels of unlabeled data and the ground truths of labeled data may cause the learned features of labeled and unlabeled data to follow different distributions. This distribution mismatch may violate the smoothness assumption widely used in the semi-supervised field, resulting in unsatisfactory performance. In this paper, we propose a novel Semi-supervised Dual-Branch Network (SDB-Net), in which the first branch is trained with both labeled and unlabeled data, while the other is trained only with the predictions on unlabeled data generated by the first branch. To avoid different distributions arising from ground-truth labels and the inferred labels of unlabeled data, we propose an effective co-consistency loss to overcome the mismatch problem and a mix-consistency loss that makes each branch learn a consistent feature representation. Meanwhile, we design an augmentation supervised loss for the first branch to further alleviate the mismatch problem. With these three losses, the proposed SDB-Net can be trained efficiently. Experimental results on three benchmark datasets, CIFAR-10, CIFAR-100, and SVHN, show the superior performance of the proposed SDB-Net. (C) 2020 Elsevier B.V. All rights reserved.
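The abstract names the overall wiring (two branches, with the second trained only on the first branch's predictions) and three losses, but does not give their exact formulations. The following PyTorch sketch is therefore only an illustration of how such a dual-branch training step could be assembled: the tiny backbone, the specific forms of the co-consistency, mix-consistency, and augmentation supervised losses, and the loss weights are all assumptions for illustration, not the authors' method.

```python
# Minimal sketch of a dual-branch semi-supervised training step in the
# spirit of the abstract. Backbone, loss forms, and weights are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_branch(num_classes: int = 10) -> nn.Module:
    # Tiny CNN classifier used as a stand-in backbone for each branch.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )


class SDBNetSketch(nn.Module):
    """Two parallel classification branches over the same input."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.branch1 = make_branch(num_classes)  # sees labeled + unlabeled data
        self.branch2 = make_branch(num_classes)  # sees only branch1's predictions

    def forward(self, x):
        return self.branch1(x), self.branch2(x)


def training_step(model, opt, x_l, y_l, x_l_aug, x_u, x_u_aug,
                  w_co=1.0, w_mix=1.0, w_aug=1.0):
    """One step combining a supervised loss with assumed, simplified
    versions of the three losses named in the abstract."""
    opt.zero_grad()

    # Branch 1: standard supervised loss on labeled data.
    loss_sup = F.cross_entropy(model.branch1(x_l), y_l)

    # Augmentation supervised loss (assumed form): branch 1 also fits
    # an augmented view of the labeled batch.
    loss_aug = F.cross_entropy(model.branch1(x_l_aug), y_l)

    # Branch 1 produces predictions on unlabeled data; branch 2 is
    # trained only against those predictions (co-consistency, assumed form).
    with torch.no_grad():
        pseudo = model.branch1(x_u).softmax(dim=1)
    loss_co = F.kl_div(model.branch2(x_u).log_softmax(dim=1), pseudo,
                       reduction="batchmean")

    # Mix-consistency (assumed interpretation): each branch should
    # predict consistently for an unlabeled sample and its augmented view.
    p1, p1_aug = model.branch1(x_u).softmax(1), model.branch1(x_u_aug).softmax(1)
    p2, p2_aug = model.branch2(x_u).softmax(1), model.branch2(x_u_aug).softmax(1)
    loss_mix = F.mse_loss(p1, p1_aug) + F.mse_loss(p2, p2_aug)

    loss = loss_sup + w_aug * loss_aug + w_co * loss_co + w_mix * loss_mix
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    model = SDBNetSketch(num_classes=10)
    opt = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9)
    # Random tensors stand in for CIFAR-style batches.
    x_l, y_l = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    x_l_aug = x_l + 0.05 * torch.randn_like(x_l)   # toy "augmentation"
    x_u = torch.randn(16, 3, 32, 32)
    x_u_aug = x_u + 0.05 * torch.randn_like(x_u)
    print(training_step(model, opt, x_l, y_l, x_l_aug, x_u, x_u_aug))
```

Note the asymmetry the abstract describes: in this sketch branch 2 never sees ground-truth labels, so its features are shaped entirely by the inferred label distribution, which is exactly the labeled/unlabeled mismatch that the co-consistency term is meant to close.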