Venue: International Conference on Computer Vision

Occlusion-Shared and Feature-Separated Network for Occlusion Relationship Reasoning



Abstract

Occlusion relationship reasoning demands a closed contour to express each object, and an orientation at each contour pixel to describe the order relationship between objects. Current CNN-based methods neglect two critical issues of the task: (1) the simultaneous existence of relevance and distinction between the two elements, i.e., occlusion edge and occlusion orientation; and (2) inadequate exploration of the orientation features. For these reasons, we propose the Occlusion-shared and Feature-separated Network (OFNet). On one hand, considering the relevance between edge and orientation, two sub-networks are designed to share the occlusion cue. On the other hand, the whole network is split into two paths to learn the high-level semantic features separately. Moreover, a contextual feature for orientation prediction is extracted, which represents the bilateral cue of the foreground and background areas. The bilateral cue is then fused with the occlusion cue to precisely locate the object regions. Finally, a stripe convolution is designed to further aggregate features from the scenes surrounding the occlusion edge. The proposed OFNet remarkably advances the state-of-the-art approaches on the PIOD and BSDS ownership datasets.
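The stripe convolution mentioned in the abstract aggregates features along elongated neighborhoods around the occlusion edge. A minimal NumPy sketch is given below, assuming uniform (average) weights over paired 1×k and k×1 stripes; the actual OFNet layer uses learned kernels, so the function name `stripe_conv` and the averaging scheme here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def stripe_conv(feat, k=7):
    """Aggregate a 2-D feature map along horizontal and vertical stripes.

    Sketch only: for each pixel, average its k-neighborhood along the row
    (1 x k stripe) and along the column (k x 1 stripe) with same-padding,
    then average the two responses. A learned layer would replace the
    uniform weights with trainable kernels.
    """
    pad = k // 2
    h, w = feat.shape
    padded = np.pad(feat, pad, mode="edge")  # same-size output via edge padding
    horiz = np.zeros((h, w), dtype=float)
    vert = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            horiz[i, j] = padded[i + pad, j:j + k].mean()  # 1 x k stripe
            vert[i, j] = padded[i:i + k, j + pad].mean()   # k x 1 stripe
    return (horiz + vert) / 2.0
```

Because the stripes extend only along one axis at a time, each output pixel pools context from a cross-shaped region centered on it, which is the intuition behind gathering scene evidence on both sides of an occlusion edge.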
