IEEE International Conference on Advanced Video and Signal Based Surveillance

Online multi-modal task-driven dictionary learning and robust joint sparse representation for visual tracking



Abstract

Robust visual tracking is a challenging problem due to pose variation, occlusion, and cluttered backgrounds. No single feature can be robust to all possible scenarios in a video sequence; however, exploiting multiple features has proven effective in overcoming challenging situations in visual tracking. We propose a new framework for multi-modal fusion at both the feature level and the decision level, training a reconstructive and discriminative dictionary and a classifier for each modality simultaneously, with the additional constraint of label consistency across the different modalities. In addition, a joint decision measure is designed based on both reconstruction and classification error to adaptively adjust the weights of the different features, so that unreliable features can be removed from tracking. The proposed tracking scheme is referred to as label-consistent and fusion-based joint sparse coding (LC-FJSC). Extensive experiments on publicly available videos demonstrate that LC-FJSC outperforms state-of-the-art trackers.
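The adaptive weighting step described in the abstract can be illustrated with a minimal sketch: per-modality errors are turned into fusion weights, and modalities with relatively large error are dropped. The exponential weighting, the temperature `tau`, and the drop threshold `drop_ratio` below are illustrative assumptions, not the paper's exact decision measure (which combines reconstruction and classification error).

```python
import numpy as np

def modality_weights(errors, tau=2.0, drop_ratio=0.5):
    """Map per-modality joint errors (reconstruction + classification)
    to fusion weights; modalities whose weight falls well below the
    best modality's weight are treated as unreliable and removed.

    NOTE: the exponential form and thresholds are illustrative
    assumptions, not the exact measure used in LC-FJSC.
    """
    errors = np.asarray(errors, dtype=float)
    w = np.exp(-tau * errors)          # lower error -> higher weight
    w[w < drop_ratio * w.max()] = 0.0  # remove unreliable features
    return w / w.sum()                 # renormalize surviving weights

# Example: the third modality has a much larger error and is removed.
weights = modality_weights([0.1, 0.2, 2.0])
```

Because the best modality always survives the relative threshold, the remaining weights always sum to one, matching the idea of adaptively reweighting only the reliable features.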
