International Conference on Computer Vision (ICCV)

FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape From Single RGB Images



Abstract

Estimating 3D hand pose from single RGB images is a highly ambiguous problem that relies on an unbiased training dataset. In this paper, we analyze cross-dataset generalization when training on existing datasets. We find that approaches perform well on the datasets they are trained on, but do not generalize to other datasets or in-the-wild scenarios. As a consequence, we introduce the first large-scale, multi-view hand dataset that is accompanied by both 3D hand pose and shape annotations. For annotating this real-world dataset, we propose an iterative, semi-automated `human-in-the-loop' approach, which includes hand fitting optimization to infer both the 3D pose and shape for each sample. We show that methods trained on our dataset consistently perform well when tested on other datasets. Moreover, the dataset allows us to train a network that predicts the full articulated hand shape from a single RGB image. The evaluation set can serve as a benchmark for articulated hand shape estimation.
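The "hand fitting optimization" mentioned above fits a parametric hand model to multi-view 2D keypoint observations by minimizing reprojection error. The sketch below illustrates that fitting loop on a toy linear stand-in for a MANO-style model with a weak-perspective camera; the model, parameter counts, and camera assumptions are illustrative placeholders, not the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a MANO-style parametric hand model: a linear map
# from pose (theta) and shape (beta) parameters to 21 3D keypoints.
# This surrogate only illustrates the fitting loop (an assumption,
# not the paper's implementation).
rng = np.random.default_rng(0)
N_KP, N_POSE, N_SHAPE = 21, 6, 4
B_pose = rng.normal(size=(N_KP * 3, N_POSE))
B_shape = rng.normal(size=(N_KP * 3, N_SHAPE))
mean_kp = rng.normal(size=N_KP * 3)

def hand_model(theta, beta):
    """Return (N_KP, 3) keypoints for the given pose/shape parameters."""
    return (mean_kp + B_pose @ theta + B_shape @ beta).reshape(N_KP, 3)

def project(kp3d, cam):
    """Weak-perspective projection: scale * xy + 2D translation."""
    s, tx, ty = cam
    return s * kp3d[:, :2] + np.array([tx, ty])

def residuals(params, kp2d_views, cams):
    """Stack 2D reprojection residuals over all camera views."""
    theta, beta = params[:N_POSE], params[N_POSE:]
    kp3d = hand_model(theta, beta)
    return np.concatenate(
        [(project(kp3d, cam) - kp2d).ravel()
         for kp2d, cam in zip(kp2d_views, cams)]
    )

# Synthetic "annotations": 2D keypoints observed from two views.
theta_gt = rng.normal(size=N_POSE)
beta_gt = rng.normal(size=N_SHAPE)
cams = [(1.0, 0.0, 0.0), (0.9, 0.1, -0.2)]
kp2d_views = [project(hand_model(theta_gt, beta_gt), c) for c in cams]

# Jointly fit pose and shape by nonlinear least squares.
fit = least_squares(residuals, np.zeros(N_POSE + N_SHAPE),
                    args=(kp2d_views, cams))
print("final reprojection cost:", fit.cost)
```

In the real semi-automated pipeline, such a fit would be run per sample and its result reviewed by a human annotator, closing the "human-in-the-loop" iteration.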


