
A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos


Abstract

Visual tracking in aerial videos is a challenging task in computer vision and remote sensing because of appearance variations, which are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to handle appearance variations in aerial videos; among them, spatiotemporal saliency detection has reported promising results for moving-target detection. However, it is not accurate when tracking is performed under appearance variations. In this study, a visual tracking method based on spatiotemporal saliency and discriminative online learning is proposed to deal with these difficulties. Temporal saliency represents the moving-target regions and is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. Spatial saliency represents the target's appearance details within the candidate moving regions; SLIC superpixel segmentation together with color and moment features is used to compute the feature uniqueness and spatial compactness of the saliency measurements. Because this computation is time consuming, a parallel algorithm is developed to optimize the saliency detection and distribute it across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm generates a sample model from the spatiotemporal saliency; this model is incrementally updated to detect the target under appearance-variation conditions. Experiments on the VIVID dataset demonstrate that the proposed method is effective and computationally efficient compared with state-of-the-art methods.
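The temporal-saliency step (frame differencing binarised with Sauvola's local adaptive threshold) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size and the k and r parameters are assumed typical values, not values reported by the authors.

```python
import numpy as np

def box_sum(a, w):
    """Sum of a over a w x w window centred at each pixel (edge-padded,
    computed in O(1) per pixel via an integral image). w must be odd."""
    pad = w // 2
    p = np.pad(a, pad, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    H, W = a.shape
    return c[w:w + H, w:w + W] - c[:H, w:w + W] - c[w:w + H, :W] + c[:H, :W]

def sauvola_threshold(img, window=15, k=0.2, r=128.0):
    """Sauvola's local adaptive threshold t = m * (1 + k * (s / r - 1)),
    where m and s are the local mean and standard deviation."""
    img = img.astype(np.float64)
    n = window * window
    mean = box_sum(img, window) / n
    var = np.maximum(box_sum(img ** 2, window) / n - mean ** 2, 0.0)
    return mean * (1.0 + k * (np.sqrt(var) / r - 1.0))

def temporal_saliency(prev_frame, curr_frame, window=15, k=0.2):
    """Binarise the absolute frame difference with Sauvola's threshold
    to obtain candidate moving-target regions."""
    diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return diff > sauvola_threshold(diff, window=window, k=k)
```

Because the threshold adapts to the local statistics of the difference image, weak camera-motion residue is suppressed while genuinely moving regions survive the binarisation.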
机译:在空中视频中的视觉跟踪是由于外观变化困难导致计算机视觉和遥感技术的具有挑战性的任务。外观变化是由相机和目标运动引起的,低分辨率噪声图像,缩放变化和姿态变化引起。已经提出了各种方法来处理航空视频中的外观变化困难,并且在这些方法中,时空效力检测方法报道了在移动目标检测的背景下的有希望的结果。然而,当在外观变化下进行视觉跟踪时,它不准确地移动目标检测。在本研究中,基于时空显着性和鉴别的在线学习方法提出了一种视觉跟踪方法,以处理外观变化困难。时间显着性用于表示移动目标区域,并且基于与Sauvola本地自适应阈值算法的帧差来提取。空间显着性用于表示候选移动区域中的目标外观细节。 SLIC Superpixel分割,颜色和时刻功能可用于计算显着测量的特征唯一性和空间紧凑性以检测空间显着性。这是一个耗时的过程,它提示开发并行算法来优化和分配加载到多处理器的显着性检测过程。然后通过组合时间和空间散发来代表移动目标来获得时空显着性。最后,应用了一种判别的在线学习算法以产生基于时空显着性的样本模型。然后将该样本模型逐步更新以检测外观变化条件中的目标。在鲜生动数据集上进行的实验表明,与最先进的方法相比,所提出的视觉跟踪方法是有效的,并且是计算的有效性。
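The spatial-saliency step and its parallelisation can be sketched as below. The superpixel label map is assumed given (e.g. from SLIC); the uniqueness and compactness measures follow the common saliency-filters formulation, and the Gaussian bandwidths, the compactness weight, and the worker pool (threads here, standing in for the multi-processor distribution described in the abstract) are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def region_features(image, labels):
    """Mean colour and normalised centroid of each superpixel region."""
    n = int(labels.max()) + 1
    feats = np.zeros((n, image.shape[2]))
    pos = np.zeros((n, 2))
    for r in range(n):
        mask = labels == r
        feats[r] = image[mask].mean(axis=0)
        ys, xs = np.nonzero(mask)
        pos[r] = ys.mean(), xs.mean()
    pos /= max(image.shape[:2])
    return feats, pos

def spatial_saliency(image, labels, sigma_p=0.25, sigma_c=20.0, workers=4):
    """Per-pixel spatial saliency: colour uniqueness, weighted down by the
    spatial spread (low compactness) of similar colours. Each region's
    score is independent, so the loop is distributed over a worker pool."""
    feats, pos = region_features(image, labels)

    def one_region(i):
        dc = np.sum((feats - feats[i]) ** 2, axis=1)   # colour distances
        dp = np.sum((pos - pos[i]) ** 2, axis=1)       # spatial distances
        wp = np.exp(-dp / (2 * sigma_p ** 2)); wp /= wp.sum()
        uniqueness = np.sum(dc * wp)                   # locally rare colour
        wc = np.exp(-dc / (2 * sigma_c ** 2)); wc /= wc.sum()
        mu = (wc[:, None] * pos).sum(axis=0)           # colour-weighted centroid
        compactness = np.sum(wc * np.sum((pos - mu) ** 2, axis=1))
        return uniqueness * np.exp(-6.0 * compactness)

    with ThreadPoolExecutor(max_workers=workers) as ex:
        sal = np.fromiter(ex.map(one_region, range(len(feats))), dtype=float)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    return sal[labels]
```

For CPU-bound numpy work a process pool would typically replace the thread pool; the per-region independence is what makes the distribution onto multiple processors straightforward.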

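The final step, a discriminative online learner whose sample model is incrementally updated, can be illustrated with a deliberately simple stand-in: a linear logistic classifier over target/background feature vectors, updated by stochastic gradient steps with a forgetting factor. The learner, features, and hyperparameters here are assumptions for illustration; the paper's actual model and update rule are not reproduced.

```python
import numpy as np

class OnlineDiscriminativeModel:
    """Incremental target-vs-background classifier (illustrative sketch).
    Positive samples would come from the spatiotemporal-salient target
    region, negatives from surrounding background."""

    def __init__(self, dim, lr=0.1, forget=0.99):
        self.w = np.zeros(dim)   # linear weights
        self.b = 0.0             # bias
        self.lr = lr             # gradient step size
        self.forget = forget     # decays old evidence so the model adapts

    def score(self, x):
        """Probability that feature vector x belongs to the target."""
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def update(self, x, y):
        """One online step: decay old weights, then a gradient step on the
        log-loss for sample x with label y (1 = target, 0 = background)."""
        self.w *= self.forget
        err = self.score(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

The forgetting factor is what lets the sample model track appearance variations: evidence from old frames is gradually discounted as new saliency-derived samples arrive.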