Most state-of-the-art approaches to Query-by-Example (QBE) video retrieval are based on the Bag-of-visual-Words (BovW) representation of visual content. This representation, however, ignores spatial-temporal information, which is important for measuring similarity between videos. Directly incorporating such information into the video representation of a large-scale data set is computationally expensive in terms of both storage and similarity measurement. It is also static, regardless of how the discriminative power of visual words changes with respect to different queries. To tackle these limitations, in this paper we propose to discover the Spatial-Temporal Correlations (STC) imposed by the query example to improve the BovW model for video retrieval. The STC, expressed as the spatial proximity and relative motion coherence between different visual words, is crucial for identifying the discriminative power of the visual words. We develop a novel technique that emphasizes the most discriminative visual words during similarity measurement, and we incorporate this STC-based approach into the standard inverted index architecture. Our approach is evaluated on the TRECVID 2002 and CC_WEB_VIDEO datasets for two typical QBE video retrieval tasks, respectively. The experimental results demonstrate that it substantially improves the BovW model as well as a state-of-the-art method that also utilizes spatial-temporal information for QBE video retrieval.
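The abstract only outlines the STC idea; the exact formulation of spatial proximity, motion coherence, and the query-dependent word weighting appears in the paper body, not here. As a purely illustrative sketch, the Python fragment below shows one plausible way such query-dependent weights could score an inverted-index BovW search: every name (stc_weights, weighted_similarity, the postings-list layout) is hypothetical, and the proximity and coherence measures are stand-in heuristics, not the authors' method.

```python
import numpy as np
from collections import defaultdict

def stc_weights(word_ids, positions, motions, num_words):
    """Assign a weight per visual word from the spatial proximity and
    relative motion coherence of its occurrences in the query example.
    (Heuristic stand-in for the paper's STC formulation.)"""
    weights = np.ones(num_words)
    occ = defaultdict(list)
    for w, p, m in zip(word_ids, positions, motions):
        occ[w].append((np.asarray(p, float), np.asarray(m, float)))
    for w, items in occ.items():
        if len(items) < 2:
            continue  # a single occurrence gives no proximity/coherence evidence
        pts = np.stack([p for p, _ in items])
        mts = np.stack([m for _, m in items])
        spread = pts.std(axis=0).mean()                    # tight spatial cluster -> small spread
        coherence = 1.0 / (1.0 + mts.std(axis=0).mean())   # similar motions -> high coherence
        weights[w] = coherence / (1.0 + spread)            # emphasize compact, coherent words
    return weights / weights.max()

def weighted_similarity(query_hist, index, weights):
    """Score database videos through an inverted index: for each visual
    word present in the query, walk its postings list of (video_id, tf)
    pairs and accumulate STC-weighted contributions."""
    scores = defaultdict(float)
    for w, q_tf in enumerate(query_hist):
        if q_tf == 0:
            continue
        for video_id, d_tf in index.get(w, []):
            scores[video_id] += weights[w] * q_tf * d_tf
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage: a 3-word vocabulary, three query keypoints, two indexed videos.
w = stc_weights(word_ids=[0, 0, 1],
                positions=[(10, 12), (11, 13), (200, 40)],
                motions=[(1.0, 0.0), (1.1, 0.1), (-3.0, 2.0)],
                num_words=3)
index = {0: [("vidA", 4), ("vidB", 1)], 1: [("vidB", 5)]}
print(weighted_similarity([2, 1, 0], index, w))
```

Because the weights multiply postings-list contributions directly, this kind of scheme slots into a standard inverted index without changing the stored postings, which matches the abstract's claim that the STC-based approach is incorporated into the standard inverted index architecture.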