Pattern Recognition Letters

Video Object Matching Across Multiple Independent Views Using Local Descriptors And Adaptive Learning



Abstract

Object detection and tracking is an essential preliminary task in event analysis systems (e.g. visual surveillance). Typically, objects are extracted and tagged, forming representative tracks of their activity. Tagging is usually performed by probabilistic data association; however, in systems capturing disjoint areas it is often not possible to establish such associations, as data may have been collected at different times or in different locations. In this case, appearance matching is a valuable aid.

We propose using bag-of-visterms, i.e. a histogram of quantized local feature descriptors, to represent and match tracked objects. This method has proven effective for object matching and classification in image retrieval applications, where descriptors can be extracted a priori. An important difference in event analysis systems is that relevant information is typically restricted to the foreground. Descriptors can therefore be extracted faster, approaching real-time requirements. Also, unlike image retrieval, objects can change over time, so their model needs to be updated continuously. Incremental or adaptive learning is used to tackle this problem. Using independent tracks of 30 different persons, we show that the bag-of-visterms representation effectively discriminates visual object tracks and that it is highly resilient to incorrect object segmentation. Additionally, this methodology allows the construction of scalable object models that can be used to match tracks across independent views.
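To make the bag-of-visterms idea concrete, the Python sketch below illustrates the general pipeline the abstract describes: local descriptors extracted from a tracked object's foreground region are quantized against a visual vocabulary, accumulated into a normalized histogram per track, and tracks are compared by histogram similarity, with an incremental update of the track model over time. This is a minimal illustration, not the authors' implementation; the vocabulary size, the exponential-forgetting update rule, and histogram intersection as the matching score are assumptions made for the example.

import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word (visterm)."""
    # descriptors: (n, d) array; vocabulary: (k, d) array of cluster centres
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def bag_of_visterms(descriptors, vocabulary):
    """Build a normalized histogram of visterm occurrences for one track observation."""
    k = vocabulary.shape[0]
    words = quantize(descriptors, vocabulary)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

def update_model(model_hist, new_hist, rate=0.1):
    """Adaptive (incremental) update of a track's appearance model.
    Hypothetical exponential-forgetting rule, not necessarily the paper's scheme."""
    updated = (1.0 - rate) * model_hist + rate * new_hist
    return updated / updated.sum()

def match_score(hist_a, hist_b):
    """Histogram intersection: higher values mean more similar appearance."""
    return np.minimum(hist_a, hist_b).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = rng.normal(size=(50, 128))   # k=50 visual words, 128-D (SIFT-like) descriptors
    track_a = bag_of_visterms(rng.normal(size=(200, 128)), vocab)
    track_b = bag_of_visterms(rng.normal(size=(180, 128)), vocab)
    print("similarity:", match_score(track_a, track_b))

In an actual system, the descriptors would come from foreground regions of the tracked object rather than random data, and the vocabulary would be learned offline (e.g. by clustering descriptors from training tracks).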

