IEEE Transactions on Circuits and Systems for Video Technology

Real-time compressed-domain spatiotemporal segmentation and ontologies for video indexing and retrieval

Abstract

In this paper, a novel algorithm is presented for the real-time, compressed-domain, unsupervised segmentation of image sequences and is applied to video indexing and retrieval. The segmentation algorithm uses motion and color information directly extracted from the MPEG-2 compressed stream. An iterative rejection scheme based on the bilinear motion model is used to effect foreground/background segmentation. Following that, meaningful foreground spatiotemporal objects are formed by initially examining the temporal consistency of the output of iterative rejection, clustering the resulting foreground macroblocks into connected regions, and finally performing region tracking. The background is additionally segmented into spatiotemporal objects. MPEG-7 compliant low-level descriptors describing the color, shape, position, and motion of the resulting spatiotemporal objects are extracted and automatically mapped to appropriate intermediate-level descriptors, forming a simple vocabulary termed the object ontology. This, combined with a relevance feedback mechanism, allows the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and the retrieval of relevant video segments. Desired spatial and temporal relationships between the objects in multiple-keyword queries can also be expressed using the shot ontology. Experimental results of applying the segmentation algorithm to known sequences demonstrate the efficiency of the proposed segmentation approach. Sample queries reveal the potential of employing this segmentation algorithm as part of an object-based video indexing and retrieval scheme.
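
To make the foreground/background step concrete, the sketch below shows what an iterative rejection loop over macroblock motion vectors could look like, assuming the common 8-parameter bilinear global-motion model vx = a0 + a1·x + a2·y + a3·x·y, vy = a4 + a5·x + a6·y + a7·x·y fitted by least squares. The function names, the fixed residual threshold, the NumPy data layout, and the convergence test are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def fit_bilinear(xs, ys, vx, vy):
    """Least-squares fit of the 8-parameter bilinear motion model:
    vx = a0 + a1*x + a2*y + a3*x*y,  vy = a4 + a5*x + a6*y + a7*x*y."""
    A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys])
    px, *_ = np.linalg.lstsq(A, vx, rcond=None)
    py, *_ = np.linalg.lstsq(A, vy, rcond=None)
    return px, py

def iterative_rejection(xs, ys, vx, vy, threshold=1.0, max_iter=20):
    """Split macroblocks into background (inliers of the global motion model)
    and foreground (rejected outliers) by repeatedly fitting and rejecting."""
    A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys])
    inliers = np.ones(len(xs), dtype=bool)
    for _ in range(max_iter):
        px, py = fit_bilinear(xs[inliers], ys[inliers], vx[inliers], vy[inliers])
        residual = np.hypot(A @ px - vx, A @ py - vy)  # per-macroblock motion error
        new_inliers = residual < threshold
        if np.array_equal(new_inliers, inliers):
            break  # converged: the rejected set no longer changes
        inliers = new_inliers
    return ~inliers  # True = foreground macroblock
```

The rejected (foreground) macroblocks would then be checked for temporal consistency, clustered into connected regions, and tracked across frames, as described above.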
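
The mapping from MPEG-7 low-level descriptors to intermediate-level descriptors of the object ontology can be pictured as a simple quantization of descriptor values into keywords. The sketch below uses hypothetical field names and thresholds (area_fraction, centroid_x, motion_magnitude); the actual vocabulary and quantization bounds are system design choices not fixed by the abstract.

```python
def intermediate_descriptors(obj):
    """Quantize low-level values of one spatiotemporal object into
    keywords of a hypothetical object-ontology vocabulary."""
    out = {}

    # Size: fraction of the frame covered by the object's support, in [0, 1].
    area = obj["area_fraction"]
    out["size"] = "small" if area < 0.05 else "medium" if area < 0.25 else "large"

    # Horizontal position of the object's centroid, normalized to [0, 1].
    x = obj["centroid_x"]
    out["position"] = "left" if x < 1 / 3 else "center" if x < 2 / 3 else "right"

    # Average speed in pixels per frame, derived from the motion descriptor.
    speed = obj["motion_magnitude"]
    out["motion"] = "low" if speed < 1 else "medium" if speed < 4 else "high"

    return out

# Example: a hypothetical object covering 12% of the frame near the left edge.
print(intermediate_descriptors(
    {"area_fraction": 0.12, "centroid_x": 0.2, "motion_magnitude": 2.5}))
# -> {'size': 'medium', 'position': 'left', 'motion': 'medium'}
```

A user query for a semantic keyword would then be matched against these intermediate-level descriptors, with relevance feedback refining the returned video segments.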
