International Joint Conference on Neural Networks

Lifting 2d Human Pose to 3d: A Weakly Supervised Approach



Abstract

Estimating 3d human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity of recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on 3d-pose-annotated datasets. However, the lack of large-scale 3d-annotated training data captured under in-the-wild settings makes 3d pose estimation difficult for in-the-wild poses. Few approaches have utilized training images from both 3d and 2d pose datasets in a weakly supervised manner for learning 3d poses in unconstrained settings. In this paper, we propose a method that can effectively predict 3d human pose from 2d pose using a deep neural network trained in a weakly supervised manner on a combination of ground-truth 3d pose and ground-truth 2d pose. Our method uses re-projection error minimization as a constraint to predict the 3d locations of body joints, which is crucial for training on data where 3d ground truth is not present. Since minimizing re-projection error alone may not guarantee an accurate 3d pose, we also impose additional geometric constraints on the skeleton to regularize the pose in 3d. We demonstrate the superior generalization ability of our method by cross-dataset validation on MPI-INF-3DHP, a challenging 3d benchmark dataset containing in-the-wild 3d poses.
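The weakly supervised objective described in the abstract, re-projection error on 2d-only data plus a geometric regularizer on the skeleton, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole camera model, the bone-length constraint used as the geometric term, the function names, and the weight `lam` are all assumptions for the sake of the example.

```python
import numpy as np

def project(points_3d, f=1.0):
    """Pinhole projection of (J, 3) joints onto the image plane (assumed camera model)."""
    return f * points_3d[:, :2] / points_3d[:, 2:3]

def reprojection_loss(pred_3d, gt_2d, f=1.0):
    """Mean squared error between projected 3d prediction and ground-truth 2d joints."""
    return np.mean((project(pred_3d, f) - gt_2d) ** 2)

def bone_length_loss(pred_3d, bones, ref_lengths):
    """Geometric regularizer: penalize deviation of bone lengths from reference lengths."""
    lengths = np.array([np.linalg.norm(pred_3d[i] - pred_3d[j]) for i, j in bones])
    return np.mean((lengths - ref_lengths) ** 2)

def weak_supervision_loss(pred_3d, gt_2d, bones, ref_lengths, lam=0.1):
    """Total loss on samples with only 2d ground truth: re-projection + lam * geometry."""
    return reprojection_loss(pred_3d, gt_2d) + lam * bone_length_loss(pred_3d, bones, ref_lengths)
```

A pose whose projection matches the 2d annotation and whose bones match the reference lengths incurs zero loss; any depth error that distorts bone lengths is penalized even though it may leave the 2d re-projection unchanged, which is the role the geometric constraint plays here.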
