European Conference on Computer Vision

Towards Viewpoint Invariant 3D Human Pose Estimation



Abstract

We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100 K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints.
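The abstract gives no implementation details, but the core idea of a convolutional-recurrent estimator that feeds its previous pose estimate back in and predicts a correction, alongside per-joint visibility for partial poses, can be illustrated with a minimal sketch. The following is an assumption-laden illustration in PyTorch, not the authors' architecture: the joint count, layer sizes, GRU cell, and visibility head are all hypothetical choices made for the example.

```python
# Minimal sketch of a convolutional-recurrent pose estimator with top-down
# error feedback. Hypothetical layer sizes and joint count; illustrative only,
# not the architecture described in the paper.
import torch
import torch.nn as nn


class IterativePoseEstimator(nn.Module):
    def __init__(self, num_joints=15, feat_dim=256, hidden_dim=256, num_steps=3):
        super().__init__()
        self.num_joints = num_joints
        self.num_steps = num_steps
        # Convolutional encoder: embeds the single-channel depth image into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # Recurrent cell: consumes image features plus the current pose estimate
        # and produces a hidden state used to predict a correction.
        self.rnn = nn.GRUCell(feat_dim + num_joints * 3, hidden_dim)
        # Two heads (multi-task): an additive pose correction, and per-joint
        # visibility logits so occluded joints can be down-weighted or dropped.
        self.delta_head = nn.Linear(hidden_dim, num_joints * 3)
        self.vis_head = nn.Linear(hidden_dim, num_joints)

    def forward(self, depth):
        batch = depth.size(0)
        feats = self.encoder(depth)                          # (B, feat_dim)
        pose = depth.new_zeros(batch, self.num_joints * 3)   # initial pose guess
        hidden = feats.new_zeros(batch, self.rnn.hidden_size)
        poses, vis_logits = [], []
        for _ in range(self.num_steps):
            # Top-down feedback: the previous estimate is fed back as input
            # and the network predicts an additive self-correction.
            hidden = self.rnn(torch.cat([feats, pose], dim=1), hidden)
            pose = pose + self.delta_head(hidden)
            poses.append(pose.view(batch, self.num_joints, 3))
            vis_logits.append(self.vis_head(hidden))
        return poses, vis_logits


if __name__ == "__main__":
    model = IterativePoseEstimator()
    depth = torch.randn(2, 1, 224, 224)   # batch of depth maps
    poses, vis = model(depth)
    print(poses[-1].shape, vis[-1].shape)  # torch.Size([2, 15, 3]) torch.Size([2, 15])
```

Training such a sketch end-to-end would supervise every refinement step's pose output and the visibility logits jointly, which is one plausible reading of the "multi-task" and "self-correct previous pose estimates" claims in the abstract.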
