International Conference on Computer Vision
You'll Never Walk Alone: Modeling Social Behavior for Multi-target Tracking

Abstract

Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location of each target solely from its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained on videos recorded from a bird's-eye view at busy locations, and applied as a motion model for multi-person tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.
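The abstract describes a motion model in which each pedestrian's predicted velocity trades off progress toward a destination against anticipated collisions with nearby agents. A minimal illustrative sketch of such an energy-based prediction is shown below; all function names, weights, and the grid search are hypothetical simplifications for illustration, not the paper's actual formulation:

```python
import math

def predicted_min_dist(p_i, v_i, p_j, v_j, horizon=2.0, steps=20):
    """Smallest distance between agents i and j if both keep a constant velocity."""
    best = float("inf")
    for k in range(steps + 1):
        t = horizon * k / steps
        dx = (p_i[0] + v_i[0] * t) - (p_j[0] + v_j[0] * t)
        dy = (p_i[1] + v_i[1] * t) - (p_j[1] + v_j[1] * t)
        best = min(best, math.hypot(dx, dy))
    return best

def energy(v, pos, dest, others, pref_speed=1.3, w_avoid=1.0, sigma=0.5):
    """Energy of a candidate velocity v: goal-attraction term + collision-avoidance term.
    Weights (w_avoid, sigma) are hypothetical, not values from the paper."""
    # Goal term: penalize deviation from the preferred velocity toward the destination.
    gx, gy = dest[0] - pos[0], dest[1] - pos[1]
    norm = math.hypot(gx, gy) or 1.0
    desired = (pref_speed * gx / norm, pref_speed * gy / norm)
    e_goal = (v[0] - desired[0]) ** 2 + (v[1] - desired[1]) ** 2
    # Avoidance term: penalize a small predicted closest approach to each neighbor,
    # so collisions are anticipated before they happen.
    e_avoid = 0.0
    for p_j, v_j in others:
        d = predicted_min_dist(pos, v, p_j, v_j)
        e_avoid += math.exp(-d ** 2 / (2 * sigma ** 2))
    return e_goal + w_avoid * e_avoid

def best_velocity(pos, dest, others, speed_grid=5, max_speed=2.0, n_angles=16):
    """Grid-search the velocity minimizing the energy (a stand-in for gradient descent)."""
    best_v, best_e = (0.0, 0.0), float("inf")
    for a in range(n_angles):
        ang = 2 * math.pi * a / n_angles
        for s in range(1, speed_grid + 1):
            sp = max_speed * s / speed_grid
            cand = (sp * math.cos(ang), sp * math.sin(ang))
            e = energy(cand, pos, dest, others)
            if e < best_e:
                best_e, best_v = e, cand
    return best_v
```

With no neighbors, the minimizer simply heads toward the destination at roughly the preferred speed; with an oncoming neighbor, the avoidance term makes it slow down or sidestep before the collision occurs, which is the early trajectory adjustment the abstract emphasizes.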
