IEEE Conference on Computer Vision and Pattern Recognition
STCT: Sequentially Training Convolutional Networks for Visual Tracking


Abstract

Due to the limited number of training samples, fine-tuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features to online applications. We regard a CNN as an ensemble, with each channel of the output feature map as an individual base learner. Each base learner is trained with a different loss criterion to reduce correlation and avoid over-training. To achieve the best ensemble online, the base learners are sequentially sampled into the ensemble via importance sampling. To further improve the robustness of each base learner, we propose training the convolutional layers with random binary masks, which serve as a regularization that forces each base learner to focus on different input features. The proposed online training method is applied to the visual tracking problem by transferring deep features trained on massive annotated visual data, and is shown to significantly improve tracking performance. Extensive experiments on two challenging benchmark data sets demonstrate that our tracking algorithm outperforms state-of-the-art methods by a considerable margin.
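The channel-as-base-learner view with random binary input masks described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's implementation: it uses 1×1 filters (so each base learner is a per-pixel linear map over input channels), and the names `masked_conv_responses`, `ensemble_response`, and the Bernoulli keep-probability 0.7 are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_conv_responses(feature_map, filters, masks):
    """Treat each output channel as an individual base learner.

    feature_map: (C_in, H, W) features from a pre-trained CNN.
    filters:     (K, C_in) 1x1 filters, one per base learner.
    masks:       (K, C_in) fixed random binary masks; each base learner
                 only sees the input channels its mask keeps, which
                 decorrelates the learners.
    Returns (K, H, W) response maps, one per base learner.
    """
    masked = filters * masks  # zero out the masked input channels
    # a 1x1 convolution is a per-pixel linear map over input channels
    return np.einsum('kc,chw->khw', masked, feature_map)

def ensemble_response(responses, weights):
    """Weighted sum over the base learners selected into the ensemble."""
    return np.einsum('k,khw->hw', weights, responses)

# toy pre-trained features: 3 channels over a 4x4 spatial grid
features = rng.standard_normal((3, 4, 4))
filters = rng.standard_normal((5, 3))                 # 5 base learners
masks = (rng.random((5, 3)) < 0.7).astype(float)      # Bernoulli(0.7) keep-masks
weights = np.full(5, 0.2)                             # uniform ensemble weights

responses = masked_conv_responses(features, filters, masks)
score_map = ensemble_response(responses, weights)     # (4, 4) tracking score map
```

In the full method, the per-learner weights would instead come from sequentially sampling base learners into the ensemble via importance sampling; here a uniform weighting stands in for that step.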
