IEEE Transactions on Intelligent Transportation Systems
A Virtual-Real Interaction Approach to Object Instance Segmentation in Traffic Scenes

Abstract

Object instance segmentation in traffic scenes is an important research topic. For training instance segmentation models, synthetic data can potentially complement real data, reducing the manual effort of annotating real images. However, the data distribution discrepancy between synthetic and real data hampers the wide application of synthetic data. In light of this, we propose a virtual-real interaction method for object instance segmentation. The method works on synthetic images with accurate annotations and real images without any labels. The virtual-real interaction guides the model to learn useful information from the synthetic data while remaining consistent with the real data. We first analyze the data distribution discrepancy from a probabilistic perspective and divide it into image-level and instance-level discrepancies. We then design two components to align these discrepancies: a global-level alignment and a local-level alignment. Furthermore, a consistency alignment component is proposed to encourage agreement between the global-level and local-level alignment components. We evaluate the proposed approach on the real Cityscapes dataset by adapting from the virtual SYNTHIA, Virtual KITTI, and VIPER datasets. The experimental results demonstrate that it achieves significantly better performance than state-of-the-art methods.
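The page does not detail how the alignment components are implemented. A common way to realize image-level (global) and instance-level (local) alignment of this kind is with adversarial domain classifiers trained through a gradient reversal layer, plus a regularizer that ties the two domain predictions together. The PyTorch sketch below is only an illustration under that assumption; the class names, tensor shapes, and loss weights are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainClassifier(nn.Module):
    """Predicts whether a pooled feature comes from the synthetic or real domain."""

    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True), nn.Linear(256, 1)
        )

    def forward(self, feat, lam=1.0):
        # Gradient reversal makes the feature extractor fight the classifier,
        # pushing synthetic and real features toward the same distribution.
        return self.net(GradReverse.apply(feat, lam))


def domain_loss(disc, feat_syn, feat_real):
    """Binary cross-entropy domain loss; also returns synthetic-domain probabilities."""
    logit_s, logit_r = disc(feat_syn), disc(feat_real)
    loss = F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s)) + \
           F.binary_cross_entropy_with_logits(logit_r, torch.zeros_like(logit_r))
    return loss, torch.sigmoid(logit_s)


def alignment_losses(d_global, d_local, img_feat_syn, img_feat_real,
                     inst_feat_syn, inst_feat_real):
    """Global-level, local-level, and consistency alignment terms.

    img_feat_*:  image-level pooled features, shape (B, C)
    inst_feat_*: per-instance (ROI) pooled features, shape (N, C)
    """
    loss_g, p_global = domain_loss(d_global, img_feat_syn, img_feat_real)
    loss_l, p_local = domain_loss(d_local, inst_feat_syn, inst_feat_real)
    # Consistency: the image-level domain prediction should agree with the
    # average instance-level prediction for the same batch.
    loss_c = (p_global.mean() - p_local.mean()).abs()
    return loss_g, loss_l, loss_c


if __name__ == "__main__":
    d_global, d_local = DomainClassifier(256), DomainClassifier(256)
    img_s, img_r = torch.randn(2, 256), torch.randn(2, 256)
    inst_s, inst_r = torch.randn(8, 256), torch.randn(8, 256)
    loss_g, loss_l, loss_c = alignment_losses(d_global, d_local,
                                              img_s, img_r, inst_s, inst_r)
    total_aux = loss_g + loss_l + 0.1 * loss_c
    total_aux.backward()
```

In training, these auxiliary terms would be weighted and added to the supervised instance segmentation loss computed on the labeled synthetic images; the paper's actual loss formulation and weights may differ from this sketch.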
