Signal Processing: Image Communication: A Publication of the European Association for Signal Processing

Adaptive weighted fusion with new spatial and temporal fingerprints for improved video copy detection

Abstract

In this paper, we propose a novel modality fusion method for combining spatial and temporal fingerprint information to improve video copy detection performance. Most previously developed methods are limited to pre-specified weights when combining spatial and temporal modality information. Hence, previous approaches cannot adaptively adjust the significance of the temporal fingerprints according to the difference between the temporal variances of the compared videos, which degrades video copy detection performance. To overcome this limitation, the proposed method extracts two types of fingerprint information: (1) a spatial fingerprint consisting of the signs of DCT coefficients in local areas of a keyframe, and (2) a temporal fingerprint computed from the temporal variances of local areas across consecutive keyframes. In addition, a so-called temporal strength measurement technique is developed to quantitatively represent the amount of temporal variance; it is used to adaptively weigh the significance of the compared temporal fingerprints. The experimental results show that the proposed modality fusion method outperforms other state-of-the-art fusion methods and popular spatio-temporal fingerprints in video copy detection. Furthermore, the proposed method reduces the time needed for video fingerprint matching by 39.0%, 25.1%, and 46.1% on our synthetic dataset, the TRECVID 2009 CCD task, and MUSCLE-VCD 2007, respectively, without a significant loss of detection accuracy. This result indicates that our proposed method can be readily incorporated into real-life video copy detection systems.
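The abstract does not give implementation details, but the following Python sketch illustrates the kind of pipeline it describes: a spatial fingerprint built from the signs of block-wise DCT coefficients of a keyframe, a temporal fingerprint built from block-wise intensity variances across consecutive keyframes, and a fused distance whose temporal weight grows with a temporal-strength term. The block size, the number of retained coefficients, and the tanh-based strength measure are illustrative assumptions, not the paper's exact parameters.

import numpy as np
from scipy.fft import dctn

def spatial_fingerprint(keyframe, block=16):
    # Sign bits of a few low-frequency DCT coefficients per local block.
    # Block size and coefficient selection are illustrative choices.
    h, w = keyframe.shape
    bits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(keyframe[y:y + block, x:x + block], norm='ortho')
            ac = coeffs.flatten()[1:5]          # skip the DC term, keep 4 AC terms
            bits.extend(ac >= 0)                # record only the signs
    return np.array(bits, dtype=np.uint8)

def temporal_fingerprint(keyframes, block=16):
    # Variance of the mean block intensity across consecutive keyframes.
    stack = np.stack(keyframes).astype(np.float64)   # shape (T, H, W)
    _, h, w = stack.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            means = stack[:, y:y + block, x:x + block].mean(axis=(1, 2))
            feats.append(means.var())
    return np.array(feats)

def fused_distance(sf_q, tf_q, sf_r, tf_r):
    # Adaptive weighted fusion: the temporal term is weighted by a
    # hypothetical "temporal strength" derived from the two temporal
    # fingerprints (a stand-in for the paper's measurement).
    d_spatial = np.mean(sf_q != sf_r)                             # normalized Hamming distance
    d_temporal = np.mean(np.abs(tf_q - tf_r) / (tf_q + tf_r + 1e-9))
    strength = np.tanh(0.5 * (tf_q.mean() + tf_r.mean()))         # assumed strength in [0, 1)
    return (1.0 - strength) * d_spatial + strength * d_temporal

Under these assumptions, two videos would be compared by extracting keyframes, computing both fingerprints for each, and ranking reference segments by the fused distance, so that temporally active clips lean on the temporal fingerprint while static clips fall back on the spatial one.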
