
Part-based Convolutional Network for Visual Tracking

Abstract

Recently, Convolutional Neural Networks (CNNs), which provide a valuable end-to-end image representation, have become a hot topic in visual tracking. Benefiting from their receptive fields and deep structure, CNNs can extract deep representations of an image, which effectively handle target deformation during tracking. However, because the convolution kernels of a CNN are globally shared, the extracted features can still be disturbed under background clutter, illumination variation, and similar conditions, which degrades the robustness of the results. In this paper, we propose a novel part-based convolutional network for visual tracking, which combines the advantages of the part-based model and the CNN for better performance. Extensive experimental results on the OTB2013 and OTB100 tracking benchmarks demonstrate that our method performs competitively against several state-of-the-art trackers.
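
To make the part-based idea concrete, the sketch below shows one simple way to combine part decomposition with CNN features: split the target patch into a grid of parts, extract a CNN feature vector per part, and score a candidate with a robust aggregate of per-part similarities so that clutter or occlusion corrupting one part does not dominate the result. This is only an illustration under assumed choices (VGG-16 backbone, a 2x2 part grid, cosine similarity, median aggregation), not the authors' architecture.

# Minimal sketch (not the paper's implementation): per-part CNN features for a
# target patch, so that local background clutter only corrupts some parts.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Backbone choice (VGG-16 conv layers) is an assumption for illustration;
# pretrained weights could be loaded instead of random initialization.
backbone = models.vgg16(weights=None).features.eval()

def part_features(patch, grid=(2, 2)):
    """Split a target patch (3 x H x W tensor) into a grid of parts and
    return one pooled CNN feature vector per part."""
    _, h, w = patch.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = []
    with torch.no_grad():
        for i in range(grid[0]):
            for j in range(grid[1]):
                part = patch[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                part = TF.resize(part, [64, 64])            # fixed input size per part
                fmap = backbone(part.unsqueeze(0))          # 1 x C x h' x w'
                feats.append(fmap.mean(dim=(2, 3)).squeeze(0))  # global average pool -> C
    return torch.stack(feats)                               # (num_parts, C)

# Usage: compare a candidate against a stored template part-by-part; the median
# of per-part similarities down-weights parts corrupted by clutter or occlusion.
template = part_features(torch.rand(3, 128, 128))
candidate = part_features(torch.rand(3, 128, 128))
sims = F.cosine_similarity(template, candidate, dim=1)
score = sims.median()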
