
Egocentric Future Localization

Abstract

We present a method for future localization: predicting plausible future trajectories of ego-motion in egocentric stereo images. Our predicted paths avoid obstacles, move between objects, and even turn around a corner into the space behind objects. As a byproduct of the predicted trajectories, we discover the empty space occluded by foreground objects. One key innovation is the creation of an EgoRetinal map, akin to an illustrated tourist map, that 'rearranges' pixels taking into account depth information, the ground plane, and body motion direction, so that motion planning and the perception of objects can be carried out in a single image space. We learn to plan trajectories directly on this EgoRetinal map using first-person experience of walking around in a variety of scenes. In the testing phase, given a novel scene, we generate multiple hypotheses of future trajectories from the learned experience. We refine them by minimizing a cost function that describes the compatibility between the obstacles in the EgoRetinal map and the trajectories. We quantitatively evaluate our method to show its predictive validity and apply it to various real-world daily activities, including walking, shopping, and social interactions.
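The abstract does not spell out the form of the trajectory-refinement step. The sketch below is a minimal illustration, assuming that trajectory hypotheses are 2-D point sequences in the EgoRetinal map's ground coordinates and that obstacles are given as a scalar cost map; the function names (trajectory_cost, refine_hypotheses), the smoothness term, and the random-perturbation search are illustrative assumptions, not the paper's actual optimization.

```python
import numpy as np

def trajectory_cost(traj_xy, obstacle_map, smooth_weight=0.1):
    """Obstacle-compatibility cost of a trajectory on a 2-D obstacle map.

    traj_xy:      (N, 2) array of (row, col) points in map coordinates.
    obstacle_map: (H, W) array; higher values mean "closer to an obstacle".
    The cost accumulates obstacle values along the path and adds a
    smoothness penalty on second differences (hypothetical form).
    """
    rows = np.clip(traj_xy[:, 0].astype(int), 0, obstacle_map.shape[0] - 1)
    cols = np.clip(traj_xy[:, 1].astype(int), 0, obstacle_map.shape[1] - 1)
    obstacle_term = obstacle_map[rows, cols].sum()

    accel = np.diff(traj_xy, n=2, axis=0)          # penalize jerky paths
    smooth_term = smooth_weight * np.square(accel).sum()
    return obstacle_term + smooth_term

def refine_hypotheses(hypotheses, obstacle_map, iters=50, step=0.5, sigma=1.0):
    """Pick the best hypothesis and locally refine it by random perturbation
    (a simple stand-in for the paper's cost minimization)."""
    best = min(hypotheses, key=lambda t: trajectory_cost(t, obstacle_map))
    best_cost = trajectory_cost(best, obstacle_map)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        candidate = best + step * rng.normal(scale=sigma, size=best.shape)
        candidate[0] = best[0]                     # keep the start fixed at the camera
        cost = trajectory_cost(candidate, obstacle_map)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Toy usage: a 100x100 map with an obstacle block and two straight-line hypotheses.
obstacle_map = np.zeros((100, 100))
obstacle_map[40:60, 45:55] = 10.0
hyp_a = np.stack([np.linspace(90, 10, 30), np.full(30, 50.0)], axis=1)        # runs through the block
hyp_b = np.stack([np.linspace(90, 10, 30), np.linspace(50, 70, 30)], axis=1)  # swerves around it
path, cost = refine_hypotheses([hyp_a, hyp_b], obstacle_map)
```

In the paper the hypotheses come from retrieved first-person walking experience; here they are passed in directly, and the local search merely illustrates how a compatibility cost can trade off obstacle avoidance against path smoothness.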
