Image and Vision Computing

Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting


Abstract

Perception systems rely, to a large extent, on neural networks. Commonly, the training of neural networks uses a finite amount of data, under the assumption that an appropriate training dataset is available which covers all relevant domains. This abstract follows the example of different lighting conditions in autonomous driving scenarios. In real-world datasets, a single source domain, such as day images, often dominates the dataset composition. This poses a risk of overfitting to specific source-domain features within the dataset and implicitly breaches the assumption of full or relevant domain coverage. When the model is applied to data outside of the source domain, performance drops, posing a significant challenge for data-driven methods. A common approach is supervised retraining of the model on additional data. Supervised training requires the laborious acquisition and labeling of an adequate amount of data and often becomes infeasible when data augmentation strategies are not applicable. Furthermore, retraining on additional data often causes a performance drop in the source domain, so-called catastrophic forgetting. In this paper, we present a self-supervised continuous domain adaptation method. A model trained supervised on the source domain (day) is used to generate pseudo labels on the samples of an adjacent target domain (dawn). The pseudo labels and samples enable fine-tuning of the existing model, which, as a result, is adapted into the intermediate domain. By iteratively repeating these steps, the model reaches the target domain (night). On the MNIST dataset and its modification, the continuous rotatedMNIST dataset, the novel method demonstrates a domain adaptation of 86.2% and catastrophic forgetting of only 1.6% in the target domain. The work contributes a hyperparameter ablation study, analysis, and discussion of the new learning strategy. (C) 2020 Elsevier B.V. All rights reserved.
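The iterative pseudo-labeling loop described in the abstract can be sketched on a toy problem. This is only an illustrative sketch: the nearest-centroid "model", the one-dimensional Gaussian class clusters, and the linear shift schedule standing in for the day-to-night drift are all assumptions of this example, not the paper's architecture or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    """Two 1-D classes whose means drift as the domain shifts (day -> night)."""
    x = np.concatenate([rng.normal(-2 + shift, 0.5, n),
                        rng.normal(+2 + shift, 0.5, n)])
    y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
    return x, y

def predict(x, centroids):
    # Assign each sample to the class of the nearest centroid.
    return np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)

# 1) Supervised training on the labeled source domain (shift = 0);
#    a nearest-centroid classifier stands in for the neural network.
xs, ys = make_domain(0.0)
centroids = np.array([xs[ys == 0].mean(), xs[ys == 1].mean()])
frozen = centroids.copy()  # source-only baseline, never adapted

# 2) Iteratively adapt through adjacent intermediate domains:
#    pseudo-label the unlabeled samples with the current model, then
#    "fine-tune" (here: re-estimate the centroids) on those pseudo labels.
for shift in np.linspace(0.3, 3.0, 10):
    x, _ = make_domain(shift)        # ground-truth labels are NOT used
    pseudo = predict(x, centroids)   # pseudo labels from the current model
    centroids = np.array([x[pseudo == k].mean() for k in (0, 1)])

# 3) Evaluate in the target domain (shift = 3.0).
xt, yt = make_domain(3.0)
adapted_acc = (predict(xt, centroids) == yt).mean()
baseline_acc = (predict(xt, frozen) == yt).mean()
print(f"adapted: {adapted_acc:.2f}, source-only baseline: {baseline_acc:.2f}")
```

Because each intermediate step is small relative to the class separation, the pseudo labels stay reliable and the model tracks the drift; the source-only baseline fails in the target domain. Measuring catastrophic forgetting on the source domain, as the paper does, would require the additional regularization the method provides and is beyond this sketch.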
