Journal: Quality Control, Transactions

PoSeg: Pose-Aware Refinement Network for Human Instance Segmentation


Abstract

Human instance segmentation is a core problem in human-centric scene understanding, and segmenting human instances poses a unique challenge to vision systems due to large intra-class variations in both appearance and shape, as well as complicated occlusion patterns. In this paper, we propose a new pose-aware human instance segmentation method. In contrast to previous pose-aware methods, which first predict bottom-up poses and then estimate instance segmentation on top of the predicted poses, our method integrates both top-down and bottom-up cues for each instance: it adopts detection results as human proposals and jointly estimates human pose and instance segmentation for each proposal. We develop a modular recurrent deep network that utilizes pose estimation to refine instance segmentation in an iterative manner. Our refinement modules exploit pose cues at two levels: as a coarse shape prior and as local part attention. We evaluate our approach on two public multi-person benchmarks: the OCHuman dataset and the COCOPersons dataset. The proposed method surpasses the state-of-the-art methods by 3.0 mAP on OCHuman and by 6.4 mAP on COCOPersons, demonstrating the effectiveness of our approach.
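The abstract describes refining an instance mask iteratively with two pose cues: a coarse shape prior derived from the estimated keypoints, and local per-part attention. The following is a minimal NumPy sketch of that idea only, not the authors' network: the helpers `pose_shape_prior` and `refine_mask`, the Gaussian rendering, and the mixing weights are all hypothetical stand-ins for the learned modules in the paper.

```python
import numpy as np

def pose_shape_prior(keypoints, shape, sigma=8.0):
    """Render a coarse shape prior as a sum of Gaussians centered on the
    predicted keypoints (hypothetical stand-in for a learned prior)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    prior = np.zeros(shape, dtype=np.float64)
    for (kx, ky) in keypoints:
        prior += np.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2 * sigma ** 2))
    return np.clip(prior, 0.0, 1.0)

def refine_mask(mask_logits, keypoints, n_iters=3):
    """Iteratively refine instance-mask logits with two pose cues:
    a global coarse shape prior and local part attention."""
    prior = pose_shape_prior(keypoints, mask_logits.shape)
    logits = mask_logits.copy()
    for _ in range(n_iters):
        # coarse shape prior: bias logits toward pose-supported regions
        logits = logits + 0.1 * np.log(prior + 1e-6)
        # local part attention: reweight logits by the normalized prior
        # (a fixed stand-in for the learned attention module)
        attn = prior / (prior.max() + 1e-6)
        logits = logits * (0.5 + 0.5 * attn)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> mask probabilities

# Usage: a flat (all-zero) logit map is pulled toward the keypoint region.
mask = refine_mask(np.zeros((32, 32)), keypoints=[(16, 16)])
```

In the paper this loop is realized by learned recurrent refinement modules; the sketch only illustrates the data flow, with the pose prior suppressing background far from the keypoints while leaving pose-supported regions intact.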
