Journal of Visual Communication & Image Representation

Adversarial learning for viewpoints invariant 3D human pose estimation



Abstract

2D pose estimation has achieved remarkable performance with deep convolutional neural networks. However, 3D pose estimation is currently constrained by the limited availability of datasets with 3D annotations. Moreover, most 3D-annotated images are captured with motion-capture systems in labs or studios, and thus differ substantially from large-scale monocular 2D pose datasets. We propose an adversarial learning framework that learns an invariant human pose latent representation from 3D-annotated datasets to improve estimation on monocular images with only 2D annotations. However, the observation coordinates of 2D and 3D datasets differ considerably, and this viewpoint issue should be separated from the invariant pose latent. We add a viewpoint-invariant module that automatically regulates the observation viewpoint of the generated 3D pose, transforming it into an observation more consistent with the 3D datasets. Our method achieves competitive results on both 2D and 3D benchmarks. (C) 2018 Published by Elsevier Inc.
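To illustrate the viewpoint-regulation idea described above, here is a minimal sketch (not the authors' code; the function name and toy pose are illustrative assumptions): a predicted rotation about the vertical axis re-expresses a generated 3D pose in a viewpoint closer to those of the MoCap dataset before it is judged by a discriminator. The key property exploited is that such a rotation changes the observation viewpoint without altering the pose's intrinsic plausibility (e.g., limb lengths are preserved).

```python
# Hypothetical sketch of viewpoint regulation for adversarial 3D pose
# training. In the paper's setting, a viewpoint module would predict the
# angle theta; here it is passed in directly for illustration.
import numpy as np

def rotate_about_y(pose, theta):
    """Rotate an (N, 3) array of joint positions about the vertical (y)
    axis by theta radians, returning the pose in the new viewpoint."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    # Row-vector convention: v' = v @ R.T is equivalent to v' = R v.
    return pose @ R.T

# A toy 3-joint "pose" (root, hip, knee) in camera coordinates.
pose = np.array([[0.0,  0.0, 0.0],
                 [0.1, -0.4, 0.0],
                 [0.1, -0.9, 0.1]])

rotated = rotate_about_y(pose, np.pi / 2)

# The viewpoint changes, but limb lengths are preserved, so the rotated
# pose is exactly as anatomically plausible as the original.
assert np.isclose(np.linalg.norm(rotated[2] - rotated[1]),
                  np.linalg.norm(pose[2] - pose[1]))
```

In the full framework, the discriminator would compare such viewpoint-regulated generated poses against real MoCap poses, so the generator is not penalized merely for producing poses observed from viewpoints absent in the 3D dataset.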


