Journal: Journal of visual communication & image representation

Siamese visual tracking with multilayer feature fusion and corner distance IoU loss

Abstract

Trackers based on Siamese networks regard the tracking task as solving a similarity problem between the target template and the search area. Using shallow networks and offline training, these trackers perform well in simple scenarios. However, due to the lack of semantic information, they have difficulty meeting the accuracy requirements of the task when faced with complex backgrounds and other challenging scenarios. To address this problem, we propose a new model that uses an improved ResNet-22 network to extract deep features with richer semantic information. Multilayer feature fusion is used to obtain a high-quality score map, reducing the influence of distracting factors in complex backgrounds on the tracker. In addition, we propose a more powerful Corner Distance IoU (intersection over union) loss function so that the algorithm can better regress the bounding box. In the experiments, the tracker was extensively evaluated on the object tracking benchmark data sets OTB2013 and OTB2015 and on the visual object tracking data sets VOT2016 and VOT2017, where it achieved competitive performance, demonstrating the effectiveness of the method.
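The abstract does not specify how the per-layer responses are combined, so the following is only a minimal sketch of SiamFC-style cross-correlation with multilayer score-map fusion, assuming the fusion is a weighted sum of per-layer response maps resized to a common resolution. The backbone, the layers selected, and the weighting scheme are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch: cross-correlate template and search features from several
# backbone layers, then fuse the resulting score maps with fixed weights.
# The layer choice and fusion weights are hypothetical placeholders.
import torch
import torch.nn.functional as F


def xcorr(template_feat: torch.Tensor, search_feat: torch.Tensor) -> torch.Tensor:
    """Cross-correlate a template feature map (1, C, h, w) over a search
    feature map (1, C, H, W); the template acts as a convolution kernel,
    yielding a single-channel score map."""
    return F.conv2d(search_feat, template_feat)


def fused_score_map(t_feats, s_feats, weights):
    """t_feats, s_feats: lists of per-layer feature maps (template / search)
    from a shared backbone; weights: per-layer fusion weights.
    Per-layer response maps are resized to a common size and summed."""
    maps = [xcorr(t, s) for t, s in zip(t_feats, s_feats)]
    size = maps[-1].shape[-2:]
    maps = [F.interpolate(m, size=size, mode="bilinear", align_corners=False)
            for m in maps]
    return sum(w * m for w, m in zip(weights, maps))
```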
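The exact form of the Corner Distance IoU loss is not given in the abstract. The sketch below is one plausible reading, modeled on the DIoU pattern: an IoU term plus a penalty on the normalized squared distances between the corresponding top-left and bottom-right corners of the predicted and ground-truth boxes. The function name and the penalty term are assumptions for illustration.

```python
# Hypothetical sketch of a corner-distance IoU loss for axis-aligned boxes
# in (x1, y1, x2, y2) format; the paper's formulation may differ.
import torch


def corner_distance_iou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) bounding boxes. Returns the mean loss."""
    # Intersection and union areas for the IoU term
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter).clamp(min=1e-7)

    # Squared distances between corresponding corners (top-left, bottom-right)
    d_tl = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d_br = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2

    # Diagonal of the smallest enclosing box, used to normalize the penalty
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    loss = 1.0 - iou + (d_tl + d_br) / (2.0 * c2.clamp(min=1e-7))
    return loss.mean()
```

As with other distance-augmented IoU losses, the corner-distance penalty keeps a useful gradient even when the predicted and ground-truth boxes do not overlap, which is the usual motivation for such terms.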
