IEEE Transactions on Pattern Analysis and Machine Intelligence
Online Object Tracking, Learning and Parsing with And-Or Graphs


Abstract

This paper presents a method, called AOGTracker, for simultaneously tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining the purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments at which to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [2], [3], and the VOT benchmarks [4] (VOT2013, VOT2014, VOT2015 and TIR2015, thermal imagery tracking). On the former, our AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. On the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, VOT2015 and TIR2015.
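To make the on-the-fly bounding-box inference concrete, below is a minimal, illustrative sketch of a Viterbi-style temporal DP over per-frame candidate boxes. It assumes each candidate carries an AOG appearance score and that temporal consistency is scored by bounding-box overlap (IoU); the function names, the pairwise IoU term and the weight `w_motion` are hypothetical simplifications, not the paper's actual spatial/temporal DP over AOG parse trees.

```python
# Illustrative sketch only: Viterbi-style temporal DP over candidate boxes.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def temporal_dp(candidates: List[List[Box]],
                scores: List[List[float]],
                w_motion: float = 1.0) -> List[int]:
    """Return the index of the best candidate box in each frame along the optimal path."""
    T = len(candidates)
    best = [scores[0][:]]                 # accumulated scores per candidate
    back = [[-1] * len(candidates[0])]    # backpointers
    for t in range(1, T):
        cur, ptr = [], []
        for j, box in enumerate(candidates[t]):
            # unary appearance score + best previous score with motion consistency
            prev = [best[t - 1][i] + w_motion * iou(candidates[t - 1][i], box)
                    for i in range(len(candidates[t - 1]))]
            i_star = max(range(len(prev)), key=prev.__getitem__)
            cur.append(scores[t][j] + prev[i_star])
            ptr.append(i_star)
        best.append(cur)
        back.append(ptr)
    # backtrack the highest-scoring trajectory
    path = [max(range(len(best[-1])), key=best[-1].__getitem__)]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Backtracking the highest-scoring trajectory mirrors the on-the-fly inference described above, although the real tracker additionally runs a spatial DP within each frame to compose the object from AOG parts.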
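As a rough illustration of the intrackability idea, the sketch below assumes it is quantified as the entropy of the softmax-normalized score map in a frame: a flat, ambiguous map yields high entropy, signalling that the object is hard to track and that the AOG structure may need re-learning. The softmax normalization and the threshold are hypothetical choices, not the paper's exact definition.

```python
# Illustrative sketch only: intrackability as entropy of a normalized score map.
import numpy as np

def intrackability(score_map: np.ndarray, temperature: float = 1.0) -> float:
    """Entropy (in nats) of the softmax-normalized score map."""
    s = score_map.astype(np.float64).ravel() / temperature
    s -= s.max()                                  # numerical stability
    p = np.exp(s)
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def needs_relearning(score_map: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag a critical moment when entropy approaches that of a uniform map."""
    max_entropy = np.log(score_map.size)          # entropy of a uniform distribution
    return intrackability(score_map) > threshold * max_entropy
```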
