International Conference on Computer Vision > FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape From Single RGB Images

FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape From Single RGB Images



Abstract

Estimating 3D hand pose from single RGB images is a highly ambiguous problem that relies on an unbiased training dataset. In this paper, we analyze cross-dataset generalization when training on existing datasets. We find that approaches perform well on the datasets they are trained on, but do not generalize to other datasets or in-the-wild scenarios. As a consequence, we introduce the first large-scale, multi-view hand dataset that is accompanied by both 3D hand pose and shape annotations. For annotating this real-world dataset, we propose an iterative, semi-automated 'human-in-the-loop' approach, which includes hand fitting optimization to infer both the 3D pose and shape for each sample. We show that methods trained on our dataset consistently perform well when tested on other datasets. Moreover, the dataset allows us to train a network that predicts the full articulated hand shape from a single RGB image. The evaluation set can serve as a benchmark for articulated hand shape estimation.


