International Conference on Pattern Recognition

Video semantic segmentation using deep multi-view representation learning



Abstract

In this paper, we propose a deep learning model based on deep multi-view representation learning to address the video object segmentation task. The proposed model emphasizes the importance of the inherent correlation between video frames and incorporates multi-view representation learning based on deep canonically correlated autoencoders. The multi-view representation learning in our model provides an efficient mechanism for capturing inherent correlations by jointly extracting useful features and learning better representations in a joint feature space, i.e., a shared representation. To increase the training data and the learning capacity, we train the proposed model with pairs of video frames, i.e., Fa and Fb. During the segmentation phase, the deep canonically correlated autoencoder model encodes useful features by processing multiple reference frames together, which are used to detect frequently reappearing objects. Our model enhances state-of-the-art deep learning-based methods that mainly focus on learning discriminative foreground representations over appearance and motion. Experimental results on two large benchmarks demonstrate that the proposed method outperforms competitive approaches and achieves good semantic segmentation performance.
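To illustrate the kind of two-view training described above, here is a minimal PyTorch sketch of a DCCAE-style model trained on frame pairs (Fa, Fb). The layer sizes, feature dimensions, and the simplified per-dimension correlation term are assumptions made for illustration only; they stand in for, and do not reproduce, the paper's exact architecture or its full CCA objective.

```python
# Minimal sketch of a two-view (DCCAE-style) model for pairs of video frames.
# Layer sizes and the simplified correlation loss are illustrative assumptions,
# not the paper's exact architecture or CCA objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    """Encoder/decoder for a single view (one frame of the pair)."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def correlation_loss(za, zb, eps: float = 1e-6):
    """Simplified alignment term: negative mean per-dimension correlation
    between the two latent views (a stand-in for the full CCA objective)."""
    za = za - za.mean(dim=0)
    zb = zb - zb.mean(dim=0)
    corr = (za * zb).sum(dim=0) / (za.norm(dim=0) * zb.norm(dim=0) + eps)
    return -corr.mean()

class DCCAELike(nn.Module):
    """Two autoencoders trained jointly on frame pairs (Fa, Fb), so their
    latents form a shared representation of correlated frame content."""
    def __init__(self, in_dim: int = 1024, latent_dim: int = 128):
        super().__init__()
        self.view_a = ViewAutoencoder(in_dim, latent_dim)
        self.view_b = ViewAutoencoder(in_dim, latent_dim)

    def loss(self, fa, fb, lam: float = 1.0):
        za, ra = self.view_a(fa)
        zb, rb = self.view_b(fb)
        recon = F.mse_loss(ra, fa) + F.mse_loss(rb, fb)
        return recon + lam * correlation_loss(za, zb)

# Usage on random stand-in features for a batch of frame pairs.
model = DCCAELike()
fa, fb = torch.randn(32, 1024), torch.randn(32, 1024)
loss = model.loss(fa, fb)
loss.backward()
```

The reconstruction terms keep each view's features informative, while the correlation term pulls the two latent codes into a joint feature space, which is the role the shared representation plays during segmentation.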
