Recently, Convolutional Neural Networks (CNNs), which provide a valuable end-to-end image representation, have become a hot topic in visual tracking. Benefiting from their large receptive fields and deep structure, CNNs can extract deep representations of an image, which effectively handle target deformation during tracking. However, because the convolution kernels of a CNN are globally shared, the network still extracts corrupted features under background clutter, illumination variation, and similar conditions, which degrades the robustness of the results. In this paper, we propose a novel part-based convolutional network for visual tracking, which combines the advantages of the part-based model and the CNN for better performance. Extensive experimental results on the OTB2013 and OTB100 tracking benchmarks demonstrate that our method performs competitively with state-of-the-art trackers.