IEEE International Conference on Computer Vision (ICCV) 2009

You'll never walk alone: Modeling social behavior for multi-target tracking

Abstract

Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location for each target solely based on its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained with videos recorded from a bird's-eye view at busy locations, and applied as a motion model for multi-person tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.
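The abstract does not reproduce the model's equations, but the idea it describes can be illustrated with a small sketch: a social-force-style prediction step in which each pedestrian steers toward a destination at a preferred speed and anticipates the trajectories of nearby people so that collisions are avoided before they occur. The sketch below is an assumption-laden illustration in Python, not the paper's actual energy model; the function name, parameters, and constants are hypothetical.

```python
import numpy as np

def predict_positions(positions, velocities, destinations, dt=0.4,
                      v_desired=1.3, tau=0.5, k_avoid=2.0, sigma=1.0):
    """One prediction step for all pedestrians (illustrative sketch).

    positions, velocities, destinations: float arrays of shape (N, 2).
    Each pedestrian accelerates toward its own destination at a preferred
    speed and is pushed away from others based on the gap predicted dt
    seconds ahead (an anticipatory repulsion term), in the spirit of the
    crowd-simulation models the abstract refers to.
    """
    n = positions.shape[0]
    forces = np.zeros_like(positions)

    for i in range(n):
        # Relaxation toward the desired velocity: preferred speed v_desired
        # in the direction of the pedestrian's destination.
        to_goal = destinations[i] - positions[i]
        dist = np.linalg.norm(to_goal) + 1e-9
        forces[i] += (v_desired * to_goal / dist - velocities[i]) / tau

        # Anticipatory repulsion: evaluate the gap to every other pedestrian
        # after dt seconds of straight-line motion, not the current gap.
        for j in range(n):
            if j == i:
                continue
            future_gap = (positions[i] + dt * velocities[i]) - \
                         (positions[j] + dt * velocities[j])
            d = np.linalg.norm(future_gap) + 1e-9
            forces[i] += k_avoid * np.exp(-d / sigma) * future_gap / d

    # Semi-implicit Euler update of velocities and positions.
    new_velocities = velocities + dt * forces
    new_positions = positions + dt * new_velocities
    return new_positions, new_velocities


# Example: two pedestrians walking toward each other sidestep early.
pos = np.array([[0.0, 0.0], [5.0, 0.2]])
vel = np.array([[1.2, 0.0], [-1.2, 0.0]])
dst = np.array([[10.0, 0.0], [-10.0, 0.0]])
pos, vel = predict_positions(pos, vel, dst)
```

In a tracker, predictions from such a step would center the per-target search regions before data association, which is where the abstract argues that modeling social interactions pays off, particularly during occlusions.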
