IEEE Transactions on Image Processing
EnAET: A Self-Trained Framework for Semi-Supervised and Supervised Learning With Ensemble Transformations



Abstract

Deep neural networks have been successfully applied to many real-world applications. However, such successes rely heavily on large amounts of labeled data, which is expensive to obtain. Recently, many methods for semi-supervised learning have been proposed and have achieved excellent performance. In this study, we propose a new EnAET framework to further improve existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance using prediction-consistency and confidence-based ideas. We are the first to explore the role of self-supervised representations in semi-supervised learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve all current semi-supervised methods. In the experiments, we use MixMatch, the current state-of-the-art method for semi-supervised learning, as a baseline to test the proposed EnAET framework. Across different datasets, we adopt the same hyper-parameters, which greatly improves the generalization ability of the EnAET framework. Experimental results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, this framework can also improve supervised learning by a large margin, including in the extremely challenging scenario with only 10 images per class. The code and experiment records are available at https://github.com/maple-research-lab/EnAET .
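The abstract describes adding self-supervised information as a regularization term on top of an existing semi-supervised loss. A minimal sketch of that idea, not the authors' implementation: an AET-style (AutoEncoding Transformations) head predicts the parameters of a random transformation from the paired representations of an image and its transformed version, and its loss is added, weighted, to whatever semi-supervised loss (e.g. MixMatch's) is already in use. All names here (`AETRegularizer`, `total_loss`, the 6-parameter affine assumption) are illustrative assumptions.

```python
# Hedged sketch of self-supervised transformation prediction used as a
# regularizer for a semi-supervised loss, per the abstract's description.
# Not the official EnAET code (see the repository linked above for that).
import torch
import torch.nn as nn


class AETRegularizer(nn.Module):
    """Predicts transformation parameters (assumed here: 6 affine
    parameters) from the concatenated features of an original image
    and its transformed counterpart."""

    def __init__(self, feat_dim: int = 32, n_params: int = 6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_params),
        )

    def forward(self, feat_orig: torch.Tensor, feat_trans: torch.Tensor) -> torch.Tensor:
        # Concatenate the two representations and regress the parameters.
        return self.head(torch.cat([feat_orig, feat_trans], dim=1))


def total_loss(semi_loss: torch.Tensor,
               pred_params: torch.Tensor,
               true_params: torch.Tensor,
               lam: float = 0.1) -> torch.Tensor:
    """Semi-supervised loss plus the self-supervised AET term as a
    weighted regularizer; lam is an illustrative hyper-parameter."""
    aet_loss = nn.functional.mse_loss(pred_params, true_params)
    return semi_loss + lam * aet_loss
```

Because the self-supervised term only adds a regularizer, this composition works with any base semi-supervised objective, which is why the framework can be layered on top of existing methods such as MixMatch.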
