ACM SIGMM international workshop on Multimedia information retrieval

Mutual relevance feedback for multimodal query formulation in video retrieval


Abstract

Video indexing and retrieval systems allow users to find relevant video segments for a given information need. A multimodal video index may include speech indices, a text-from-screen (OCR) index, semantic visual concepts, content-based image features, audio features, and more. Formulating an effective multimodal query for a given information need is far less intuitive and more challenging for the user than composing a text query in document search. This paper describes a video retrieval system that uses mutual relevance feedback for multimodal query formulation. Through an iterative search-and-browse session, the user provides relevance feedback on the system's output, and the system in turn provides feedback to the user, leading to a better query and better retrieval results. Official evaluation results from the NIST TRECVID 2004 Search Task are provided for both Manual and Interactive search. In the Manual task, the queries resulting from mutual feedback on the training data significantly improve retrieval performance. A further improvement over manual search is achieved in the Interactive task by using both browsing and mutual feedback on the test set.