Quality Control, Transactions

Semi-Global Context Network for Semantic Correspondence


Abstract

Estimating semantic correspondence between pairs of images can be challenging as a result of intra-class variation, background clutter, and repetitive patterns. This paper proposes a convolutional neural network (CNN) that learns rich semantic representations incorporating global semantic context, enabling semantic correspondence estimation that is robust to intra-class variation and repetitive patterns. We introduce a global-context-fused feature representation that efficiently exploits the global semantic context when estimating semantic correspondence, together with a semi-global self-similarity feature that reduces the distraction caused by background clutter when capturing that context. The proposed network is trained end-to-end with a weakly supervised loss, which requires only weak supervision in the form of image-pair annotations. This weakly supervised loss is supplemented with a historical averaging loss to train the network effectively. Our approach decreases running time by a factor of more than four, reduces the training memory requirement by a factor of three, and produces competitive or superior results relative to previous approaches on the PF-PASCAL, PF-WILLOW, and TSS benchmarks.
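To make the idea of a self-similarity feature concrete, the sketch below computes a local self-similarity descriptor over a CNN feature map: each spatial position is compared (by cosine similarity) with its neighbors inside a fixed window, so the descriptor encodes local structure rather than raw appearance, which is what makes it less sensitive to background clutter. This is a minimal illustrative NumPy implementation, not the paper's exact formulation; the function name, window size, and array layout are assumptions.

```python
import numpy as np

def semi_global_self_similarity(feat, window=5):
    """Illustrative self-similarity descriptor (not the paper's exact method).

    feat: CNN feature map of shape (C, H, W).
    Returns: array of shape (window*window, H, W) holding the cosine
    similarity between each spatial position and each of its neighbors
    inside a (window x window) region centered on it.
    """
    C, H, W = feat.shape
    # L2-normalize along channels so dot products become cosine similarities
    feat = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-8)
    pad = window // 2
    # Zero-pad spatially so border positions also have a full neighborhood
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)))
    sim = np.empty((window * window, H, W), dtype=feat.dtype)
    for idx in range(window * window):
        dy, dx = divmod(idx, window)  # offset of this neighbor in the window
        shifted = padded[:, dy:dy + H, dx:dx + W]
        sim[idx] = (feat * shifted).sum(axis=0)  # per-position dot product
    return sim
```

The channel at the window's center index is each position's similarity with itself (1 after normalization); the "semi-global" aspect in the paper extends this local comparison toward wider context, which this toy version does not attempt.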


