Computer Vision and Image Understanding

Tracking object poses in the context of robust body pose estimates

Abstract

This work focuses on tracking objects being used by humans. These objects are often small, fast-moving and heavily occluded by the user, so attempting to recover their 3D position and orientation over time is a challenging research problem. To make progress, we appeal to the fact that these objects are often used in a consistent way: the body poses of different people using the same object tend to have similarities, and, when considered relative to those body poses, so do the respective object poses. Our intuition is that, in the context of recent advances in body-pose tracking from RGB-D data, robust object-pose tracking during human-object interactions should also be possible. We propose a combined generative and discriminative tracking framework that can follow gradual changes in object pose over time, but can also re-initialise the object pose upon recognising distinctive body poses. The framework predicts object pose relative to a set of independent coordinate systems, each centred on a different part of the body. We conduct a quantitative investigation into which body parts serve as the best predictors of object pose over the course of different interactions. We find that while object translation should be predicted from nearby body parts, object rotation is predicted more robustly by using a much wider range of body parts. Our main contribution is the first object-tracking system able to estimate 3D translation and orientation from RGB-D observations of human-object interactions. By tracking precise changes in object pose, our method opens up the possibility of more detailed computational reasoning about human-object interactions and their outcomes: for example, assistive living systems that go beyond just recognising the actions and objects involved in everyday tasks such as sweeping or drinking, to reasoning that a person has "missed sweeping under the chair" or "not drunk enough water today".
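The abstract's key representational idea — expressing the object's pose in coordinate frames centred on individual body parts, so that consistent use of an object yields a stable part-relative pose — can be sketched with 4×4 homogeneous transforms. Below is a minimal numpy illustration under that assumption; the function names and toy values are ours and do not reflect the authors' implementation.

```python
import numpy as np

def relative_pose(T_part_world, T_obj_world):
    # Express the object's pose in a body-part-centred frame:
    # T_rel = inv(T_part) @ T_obj. Both inputs are 4x4 homogeneous
    # transforms from the part/object frame into the world (camera) frame.
    return np.linalg.inv(T_part_world) @ T_obj_world

def predict_object_pose(T_part_world, T_rel):
    # Map a stored part-relative object pose back into the world frame
    # using a newly tracked body-part pose.
    return T_part_world @ T_rel

# Hypothetical example: a cup held roughly 20 cm in front of a hand.
T_hand = np.eye(4); T_hand[:3, 3] = [0.10, 1.20, 0.80]  # hand pose, camera frame
T_cup = np.eye(4);  T_cup[:3, 3] = [0.10, 1.20, 1.00]   # cup pose, camera frame
T_rel = relative_pose(T_hand, T_cup)  # stable across frames if the object is used consistently
print(predict_object_pose(T_hand, T_rel))  # recovers T_cup
```

In the paper's finding, the translational part of such a relative pose is best predicted from nearby body parts, while the rotational part benefits from pooling predictions over a much wider range of body parts.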
