Data Distillation: Towards Omni-Supervised Learning

Abstract

We investigate omni-supervised learning, a special regime of semi-supervised learning in which the learner exploits all available labeled data plus internet-scale sources of unlabeled data. Omni-supervised learning is lower-bounded by performance on existing labeled datasets, offering the potential to surpass state-of-the-art fully supervised methods. To exploit the omni-supervised setting, we propose data distillation, a method that ensembles predictions from multiple transformations of unlabeled data, using a single model, to automatically generate new training annotations. We argue that visual recognition models have recently become accurate enough that it is now possible to apply classic ideas about self-training to challenging real-world data. Our experimental results show that in the cases of human keypoint detection and general object detection, state-of-the-art models trained with data distillation surpass the performance of using labeled data from the COCO dataset alone.
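
The abstract's core recipe, ensembling a single model's predictions across multiple transformations of an unlabeled image and keeping the confident results as new annotations, can be sketched in a few lines. The Python/NumPy snippet below is a minimal illustration, not the paper's implementation: `model`, the transform list, and `score_threshold` are hypothetical stand-ins (the paper applies trained detectors such as Mask R-CNN with transforms like horizontal flipping and multi-scale resizing, and converts the ensembled output into hard keypoint/box annotations for retraining).

```python
import numpy as np

def data_distillation_labels(model, image, transforms, inverse_transforms,
                             score_threshold=0.5):
    """Ensemble a single model's predictions over multiple geometric
    transforms of one unlabeled image, then keep only confident scores
    as automatically generated training annotations (a sketch)."""
    accumulated = None
    for t, t_inv in zip(transforms, inverse_transforms):
        pred = model(t(image))   # predict on the transformed input
        pred = t_inv(pred)       # map the prediction back to the original frame
        accumulated = pred if accumulated is None else accumulated + pred
    mean_pred = accumulated / len(transforms)
    # Threshold the ensembled score map: confident entries become pseudo-labels.
    return np.where(mean_pred >= score_threshold, mean_pred, 0.0)

# Toy usage: a sigmoid "model" and two transforms (identity, horizontal flip).
rng = np.random.default_rng(0)
toy_model = lambda img: 1.0 / (1.0 + np.exp(-img))
image = rng.normal(size=(4, 4))
transforms = [lambda x: x, np.fliplr]
inverses = [lambda x: x, np.fliplr]  # horizontal flip is its own inverse
pseudo_labels = data_distillation_labels(toy_model, image, transforms, inverses)
print(pseudo_labels.shape)  # (4, 4)
```

Geometric transforms suit this scheme because their effect on the output is invertible, which is what allows each per-transform prediction to be mapped back to a common frame before averaging.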
