International Journal of Pattern Recognition and Artificial Intelligence

VIDEO RETRIEVAL VIA LEARNING COLLABORATIVE SEMANTIC DISTANCE

Abstract

Graph-based semi-supervised learning approaches have proven effective and efficient at addressing the insufficiency of labeled data in many real-world applications, such as video annotation. However, the pairwise similarity metric, a key component of these approaches, has not been fully investigated: existing graph-based semi-supervised approaches estimate the pairwise similarity between samples mainly from the spatial properties of video data, while the temporal property, an essential characteristic of video data, is not embedded in the similarity measure. Accordingly, this paper proposes a novel framework for video annotation, called Joint Spatio-Temporal Correlation Learning (JSTCL), which simultaneously takes the spatial and temporal properties of video data into account to obtain more accurate pairwise similarity values. We apply the proposed framework to video annotation and report superior performance compared with key existing approaches on the benchmark TRECVID data set.
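The abstract describes, but does not specify, how the spatial and temporal properties are combined into a pairwise similarity. The sketch below is a minimal illustration only, not the paper's JSTCL formulation: it assumes the joint affinity is a product of Gaussian kernels over feature distance and timestamp distance, followed by standard graph-based label propagation in the style of Zhou et al. All parameter names (sigma_s, sigma_t, alpha) are hypothetical.

```python
import numpy as np

def spatio_temporal_affinity(features, timestamps, sigma_s=1.0, sigma_t=1.0):
    """Pairwise affinity for video samples (e.g. shots), combining a
    spatial (feature) kernel with a temporal (timestamp) kernel.
    This combination rule is an assumption, not the paper's formula."""
    # Spatial term: Gaussian kernel over squared feature distances.
    d_s = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    w_s = np.exp(-d_s / (2 * sigma_s ** 2))
    # Temporal term: Gaussian kernel over squared timestamp differences.
    d_t = (timestamps[:, None] - timestamps[None, :]) ** 2
    w_t = np.exp(-d_t / (2 * sigma_t ** 2))
    return w_s * w_t  # joint spatio-temporal affinity

def label_propagation(W, Y, alpha=0.99):
    """Standard graph-based semi-supervised propagation:
    F* = (I - alpha * S)^{-1} Y, with S the symmetrically
    normalized affinity D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 4))      # per-shot visual features (toy data)
    t = np.arange(6.0)               # shot timestamps
    Y = np.zeros((6, 2))
    Y[0, 0] = Y[5, 1] = 1.0          # two labeled shots, rest unlabeled
    W = spatio_temporal_affinity(X, t)
    F = label_propagation(W, Y)
    print(F.argmax(axis=1))          # propagated concept label per shot
```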
