IEEE Transactions on Circuits and Systems for Video Technology

Deep Continuous Conditional Random Fields With Asymmetric Inter-Object Constraints for Online Multi-Object Tracking



Abstract

Online multi-object tracking (MOT) is a challenging problem with many important applications, including intelligent surveillance, robot navigation, and autonomous driving. In existing MOT methods, individual objects' movements and inter-object relations are mostly modeled separately, and the relations between the two are still manually tuned. In addition, inter-object relations are mostly modeled in a symmetric way, which we argue is not an optimal setting. To tackle these difficulties, in this paper we propose a deep continuous conditional random field (DCCRF) for solving the online MOT problem in a tracking-by-detection framework. The DCCRF consists of unary and pairwise terms. The unary terms estimate tracked objects' displacements across time based on visual appearance information. They are modeled as deep convolutional neural networks, which are able to learn discriminative visual features for tracklet association. The pairwise terms model inter-object relations in an asymmetric way, which encourages high-confidence tracklets to help correct the errors of low-confidence tracklets while remaining largely unaffected by them. The DCCRF is trained in an end-to-end manner to better balance the influences of visual information and inter-object relations. Extensive experimental comparisons with state-of-the-art methods, as well as a detailed component analysis of the proposed DCCRF on two public benchmarks, demonstrate the effectiveness of the proposed MOT framework.
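The asymmetric pairwise idea described in the abstract — high-confidence tracklets correcting low-confidence ones, but not the reverse — can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's actual DCCRF inference: it refines each tracklet's unary displacement estimate using only neighbors whose confidence exceeds its own, so influence flows one way. All names (`asymmetric_refine`, `alpha`, the confidence-difference weight) are illustrative assumptions.

```python
import numpy as np

def asymmetric_refine(displacements, confidences, relations, alpha=0.5):
    """Sketch of an asymmetric pairwise update (not the paper's exact model).

    displacements: (N, 2) array of per-tracklet displacement estimates
    confidences:   length-N list of tracklet confidence scores in [0, 1]
    relations:     dict mapping tracklet index -> list of neighbor indices
    alpha:         blend factor between unary and pairwise estimates
    """
    refined = displacements.copy()
    for i in range(len(displacements)):
        num = np.zeros_like(displacements[i])
        den = 0.0
        for j in relations.get(i, []):
            # Asymmetric weight: only neighbors MORE confident than
            # tracklet i contribute; a low-confidence neighbor gets
            # weight zero, so it cannot drag down a confident tracklet.
            w = max(confidences[j] - confidences[i], 0.0)
            num += w * displacements[j]
            den += w
        if den > 0:
            refined[i] = (1 - alpha) * displacements[i] + alpha * (num / den)
    return refined

# Two tracklets: tracklet 0 is confident, tracklet 1 is not.
d = np.array([[1.0, 0.0], [5.0, 0.0]])
out = asymmetric_refine(d, [0.9, 0.1], {0: [1], 1: [0]})
# Tracklet 0 is unchanged; tracklet 1 is pulled toward tracklet 0.
```

In the actual paper, this asymmetry is realized inside learned pairwise terms of a continuous CRF trained end-to-end with the CNN-based unary terms, rather than by an explicit confidence-difference rule.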
