Neurocomputing

Pixel and feature level based domain adaptation for object detection in autonomous driving



Abstract

Annotating large-scale datasets to train modern convolutional neural networks is prohibitively expensive and time-consuming for many real tasks. One alternative is to train the model on labeled synthetic datasets and apply it to real scenes. However, this straightforward approach often generalizes poorly, mainly because of the domain bias between the synthetic and real datasets. Many unsupervised domain adaptation (UDA) methods have been introduced to address this problem, but most of them focus only on the simpler classification task. This paper presents a novel UDA model that integrates both image-level and feature-level adaptation to solve the cross-domain object detection problem. We employ the objectives of a generative adversarial network together with a cycle-consistency loss for image translation. Furthermore, region-proposal-based feature adversarial training and classification are proposed to further reduce the domain shift and preserve the semantics of the target objects. Extensive experiments are conducted on several different adaptation scenarios, and the results demonstrate the robustness and superiority of the proposed method. (C) 2019 Elsevier B.V. All rights reserved.
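
The abstract names two kinds of adaptation losses: image-level translation trained with a GAN objective plus a cycle-consistency term, and feature-level adversarial training on region-proposal features. The following is a minimal PyTorch sketch of those loss terms only, assuming illustrative module names, shapes, and a least-squares GAN formulation; it is not the authors' exact architecture or training code.

```python
# Sketch (assumptions): G_st / G_ts are source->target / target->source generators,
# D_t is a discriminator on target-style images, and ROI features are pooled
# region-proposal features from a detector backbone. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class RoIDomainClassifier(nn.Module):
    """Predicts source vs. target domain from pooled region-proposal features."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(inplace=True), nn.Linear(128, 1)
        )

    def forward(self, roi_feats, lambd=1.0):
        return self.net(GradientReversal.apply(roi_feats, lambd))


def image_level_losses(G_st, G_ts, D_t, src_img, tgt_img):
    """GAN + cycle-consistency objectives for source-to-target image translation."""
    fake_tgt = G_st(src_img)          # render the source image in the target style
    rec_src = G_ts(fake_tgt)          # translate back for the cycle-consistency term

    # Least-squares GAN losses (one common formulation, used here as a placeholder).
    pred_fake = D_t(fake_tgt)
    adv_g = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    pred_real = D_t(tgt_img)
    pred_fake_d = D_t(fake_tgt.detach())
    adv_d = 0.5 * (
        F.mse_loss(pred_real, torch.ones_like(pred_real))
        + F.mse_loss(pred_fake_d, torch.zeros_like(pred_fake_d))
    )

    cycle = F.l1_loss(rec_src, src_img)  # ||G_ts(G_st(x)) - x||_1
    return adv_g, adv_d, cycle


def feature_level_loss(domain_clf, src_roi_feats, tgt_roi_feats, lambd=1.0):
    """Adversarial domain-confusion loss on ROI features (source=0, target=1)."""
    src_logits = domain_clf(src_roi_feats, lambd)
    tgt_logits = domain_clf(tgt_roi_feats, lambd)
    logits = torch.cat([src_logits, tgt_logits])
    labels = torch.cat([torch.zeros_like(src_logits), torch.ones_like(tgt_logits)])
    return F.binary_cross_entropy_with_logits(logits, labels)
```

In a full detector these terms would be weighted and added to the usual region-proposal and detection losses; the gradient-reversal layer pushes the pooled features toward domain invariance while the detection head keeps the object semantics.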
