IEEE Conference on Computer Vision and Pattern Recognition

Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking



Abstract

Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments, and other perturbations. Existing tracking-by-detection methods either ignore this problem or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.
