IEEE Transactions on Visualization and Computer Graphics

Weakly Supervised Adversarial Learning for 3D Human Pose Estimation from Point Clouds


Abstract

Point-cloud-based 3D human pose estimation, which aims to recover the 3D locations of human skeleton joints, plays an important role in many AR/VR applications. The success of existing methods generally relies on large-scale data annotated with 3D human joints. However, annotating 3D human joints from input depth images or point clouds is a labor-intensive and error-prone process, due to self-occlusion between body parts as well as the tedious annotation workflow on 3D point clouds. Meanwhile, it is much easier to construct human pose datasets with 2D human joint annotations on depth images. To address this problem, we present a weakly supervised adversarial learning framework for 3D human pose estimation from point clouds. In contrast to existing 3D human pose estimation methods based on depth images or point clouds, we exploit both weakly supervised data annotated only with 2D human joints and fully supervised data annotated with 3D human joints. To relieve the human pose ambiguity caused by weak supervision, we adopt adversarial learning to ensure that the recovered human pose is valid. Instead of using either the 2D or the 3D representation of depth images as in previous methods, we exploit both point clouds and the input depth image. We adopt a 2D CNN to extract 2D human joints from the input depth image; these 2D joints help us obtain initial 3D human joints and select effective sampling points, which reduces the computational cost of 3D human pose regression with the point cloud network. The point cloud network also narrows the domain gap between the network input (point clouds) and the output 3D joints. Thanks to the weakly supervised adversarial learning framework, our method achieves accurate 3D human pose estimation from point clouds. Experiments on the ITOP and EVAL datasets demonstrate that our method achieves state-of-the-art performance efficiently.
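The abstract mentions two depth-image-side steps: lifting the extracted 2D joints to initial 3D joints, and using the 2D joints to select effective sampling points for the point cloud network. The paper's exact procedures are not given here, so the sketch below is only an illustration under standard assumptions: initial 3D joints obtained by inverse pinhole projection of each 2D joint through the depth map, and sampling points chosen as depth pixels near some 2D joint. All function names, intrinsics values, and the `radius`/`stride` parameters are hypothetical.

```python
import numpy as np

def backproject_joints(joints_2d, depth, fx, fy, cx, cy):
    # Lift each 2D joint (u, v) to camera-space 3D using the depth
    # value at that pixel and pinhole intrinsics (assumed known).
    joints_3d = []
    for u, v in joints_2d:
        z = depth[int(v), int(u)]   # depth at the joint pixel
        x = (u - cx) * z / fx       # inverse pinhole projection
        y = (v - cy) * z / fy
        joints_3d.append((x, y, z))
    return np.array(joints_3d)

def sample_points_near_joints(depth, joints_2d, radius=16, stride=2):
    # Keep only valid depth pixels within `radius` pixels of some 2D
    # joint, subsampled by `stride` - a simple stand-in for the
    # "effective sampling point" selection described in the abstract.
    h, w = depth.shape
    pts = []
    for v in range(0, h, stride):
        for u in range(0, w, stride):
            if depth[v, u] <= 0:    # skip invalid (zero) depth
                continue
            if any((u - ju) ** 2 + (v - jv) ** 2 <= radius ** 2
                   for ju, jv in joints_2d):
                pts.append((u, v))
    return pts

# Toy example: a flat depth map 2 m from the camera.
depth = np.full((240, 320), 2.0)
init_3d = backproject_joints([(200, 100)], depth,
                             fx=285.0, fy=285.0, cx=160.0, cy=120.0)
pts = sample_points_near_joints(depth, [(200, 100)], radius=4, stride=2)
```

Restricting the point cloud network's input to pixels near predicted joints is one plausible way the 2D joints could cut regression cost, since most depth pixels belong to the background rather than the body.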
