
Robust object tracking based on multiple cues.


Abstract

Many real-world applications, e.g., video surveillance and video conferencing, require robust visual object tracking. However, due to the richness and large variations in the visual inputs, robust and efficient visual tracking in complex environments remains an open problem even after years of research. This dissertation is dedicated to achieving efficient and robust tracking of nonrigid objects based on multiple visual cues, constraints, and global prior knowledge of the objects. Data-driven algorithms and sampling-based methods are designed and integrated to fuse various sensors and prior knowledge for real-time probabilistic object tracking.

Two efficient data-driven tracking algorithms are proposed in Chapters 2 and 3, which focus on region-based and contour-based object tracking, respectively. Tracking is treated not only as a matching problem but also as a grouping problem. The presented methods incorporate both foreground and background models to detect the new object position based on multiple visual cues (e.g., motion, color, and edges), spatio-temporal constraints, and prior knowledge of object dynamics and shape.

Chapter 4 addresses robustness by fusing multiple sensors and maintaining multiple hypotheses. An efficient hybrid Bayesian sensor fusion framework is proposed to combine the individual data-driven trackers with high-level knowledge of the object likelihood model. Unlike traditional sensor fusion algorithms, a closed loop is formed to adapt, evaluate, and calibrate the different data-driven trackers based on generative and discriminant models of the objects. A real-time multiple-speaker tracking system based on sound source localization, color blobs, and object contours validates the efficacy of the proposed fusion framework.
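The core idea summarized above is that several independent visual cues each score a set of candidate object states, the scores are fused probabilistically, and multiple hypotheses are kept alive. The sketch below is not the dissertation's algorithm; the cue models (color_likelihood, motion_likelihood), the constant-velocity prediction, and all numeric parameters are hypothetical placeholders, shown only to make multi-cue weighting and multiple-hypothesis resampling concrete.

```python
import numpy as np

# Toy state: (x, y) image position of the tracked object.
rng = np.random.default_rng(0)

def color_likelihood(state, frame_hist, ref_hist):
    """Placeholder color cue: Bhattacharyya-style similarity between a
    reference histogram and the histogram observed in the current frame.
    A real tracker would sample the image region around `state`; here two
    fixed histograms stand in for that step."""
    bc = np.sum(np.sqrt(frame_hist * ref_hist))
    return np.exp(-(1.0 - bc) / 0.1)

def motion_likelihood(state, predicted):
    """Placeholder motion cue: Gaussian penalty on deviation from a
    constant-velocity prediction of the object position."""
    d2 = np.sum((state - predicted) ** 2)
    return np.exp(-d2 / (2 * 5.0 ** 2))

def fuse_and_resample(particles, weights):
    """Multinomial resampling: keeps multiple hypotheses alive while
    concentrating particles on high-likelihood regions."""
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# --- one tracking step over N hypotheses -------------------------------
N = 200
particles = rng.normal(loc=[60.0, 40.0], scale=3.0, size=(N, 2))
ref_hist = np.array([0.5, 0.3, 0.2])        # reference color model
frame_hist = np.array([0.45, 0.35, 0.20])   # color model observed in the frame
predicted = np.array([62.0, 41.0])          # dynamics (constant-velocity) prediction

# Each cue votes independently; the fused weight of a hypothesis is the
# product of its per-cue likelihoods.
weights = np.array([
    color_likelihood(p, frame_hist, ref_hist) * motion_likelihood(p, predicted)
    for p in particles
])
particles = fuse_and_resample(particles, weights)
estimate = particles.mean(axis=0)
print("fused position estimate:", estimate)
```

Multiplying the per-cue likelihoods assumes the cues are conditionally independent given the object state; the closed-loop adaptation, evaluation, and calibration of the individual trackers described in Chapter 4 is beyond the scope of this toy sketch.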

Bibliographic Details

  • Author

    Chen, Yunqiang.

  • Author Affiliation

    University of Illinois at Urbana-Champaign.

  • Degree Grantor: University of Illinois at Urbana-Champaign.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2002
  • Pages: 81 p.
  • Total Pages: 81
  • Original Format: PDF
  • Language: eng
  • CLC Classification: Automation technology, computer technology
  • Keywords:
