IEEE/CVF Conference on Computer Vision and Pattern Recognition

Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View With a Reachability Prior



Abstract

In this paper, we investigate the problem of anticipating future dynamics, particularly the future location of other vehicles and pedestrians, in the view of a moving vehicle. We approach two fundamental challenges: (1) the partial visibility due to the egocentric view with a single RGB camera and considerable field-of-view change due to the egomotion of the vehicle; (2) the multimodality of the distribution of future states. In contrast to many previous works, we do not assume structural knowledge from maps. We rather estimate a reachability prior for certain classes of objects from the semantic map of the present image and propagate it into the future using the planned egomotion. Experiments show that the reachability prior combined with multi-hypotheses learning improves multimodal prediction of the future location of tracked objects and, for the first time, the emergence of new objects. We also demonstrate promising zero-shot transfer to unseen datasets.
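The multimodal prediction described above relies on multi-hypothesis learning, where a network outputs several candidate future locations and only the best-matching one is penalized during training. A minimal sketch of that winner-takes-all idea is below; the function name and the use of plain NumPy arrays are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wta_loss(hypotheses, target):
    """Winner-takes-all loss over K candidate future locations.

    Only the hypothesis closest to the ground-truth location is
    penalized, so the set of hypotheses is free to spread over the
    modes of a multimodal future-state distribution instead of
    collapsing to their mean.
    """
    errors = np.linalg.norm(hypotheses - target, axis=1)  # (K,) L2 errors
    return errors.min()

# Three hypotheses for one tracked object; the ground truth (11, 6)
# is nearest to the first hypothesis, so only that error counts.
hyps = np.array([[10.0, 5.0], [12.0, 8.0], [3.0, 4.0]])
print(wta_loss(hyps, np.array([11.0, 6.0])))  # -> sqrt(2) ~ 1.4142
```

Averaging the error over all hypotheses instead of taking the minimum would pull every hypothesis toward the mean of the modes, which is exactly the failure mode multi-hypothesis learning avoids.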

