ACM Transactions on Graphics

Single Depth View Based Real-Time Reconstruction of Hand-Object Interactions


Abstract

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image, and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is saved for the real-time performance. A hybrid dataset is constructed here to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefitting from a novel tangential contact constraint, the system not only solves the remaining ambiguities but also keeps the real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
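The abstract describes a joint learning network in which most layers are shared between the two per-frame tasks: segmenting hand and object in the depth image, and predicting the 3D keypoints of the hand. Below is a minimal PyTorch sketch of such a shared-backbone, two-head design; it is not the authors' architecture, and the layer sizes, the 21-keypoint count, and the three segmentation classes (background/hand/object) are illustrative assumptions.

```python
# Minimal sketch of a shared-encoder, two-head network for depth-based
# hand/object segmentation and 3D hand keypoint regression.
# Architecture details here are assumptions, not the paper's network.
import torch
import torch.nn as nn


class JointHandObjectNet(nn.Module):
    def __init__(self, num_keypoints=21, num_classes=3):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Shared encoder: both tasks reuse these features, which is what
        # the abstract credits for keeping the per-frame cost low.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Head 1: per-pixel segmentation into background / hand / object.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        # Head 2: 3D hand keypoints regressed from pooled shared features.
        self.kpt_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_keypoints * 3),
        )

    def forward(self, depth):                     # depth: (B, 1, H, W)
        feat = self.encoder(depth)
        seg_logits = self.seg_head(feat)          # (B, num_classes, H, W)
        kpts = self.kpt_head(feat)                # (B, num_keypoints * 3)
        return seg_logits, kpts.view(-1, self.num_keypoints, 3)


if __name__ == "__main__":
    net = JointHandObjectNet()
    depth = torch.randn(2, 1, 128, 128)           # stand-in for a depth frame
    seg, kpts = net(depth)
    print(seg.shape, kpts.shape)                  # (2, 3, 128, 128), (2, 21, 3)
```

In the paper's pipeline, the segmentation masks and keypoints produced by such a network would then feed the unified optimization that recovers object shape and rigid/non-rigid motion; the tangential contact constraint mentioned in the abstract belongs to that optimization stage and is not sketched here.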
