Image Processing, IET

Query-dependent metric learning for adaptive, content-based image browsing and retrieval



Abstract

Content-based image retrieval (CBIR) systems often incorporate a relevance feedback mechanism in which retrieval is adapted based on users identifying images as relevant or irrelevant. Such relevance decisions are often assumed to be category-based. However, forcing a user to decide upon category membership of an image, even when unfamiliar with a database and irrespective of context, is restrictive. An alternative is to obtain user feedback in the form of relative similarity judgments. The ability of a user to provide meaningful feedback depends on the interface that displays retrieved images and facilitates the feedback. Similarity-based 2D layouts provide context and can enable more efficient visual search. Motivated by these observations, this study describes and evaluates an interactive image browsing and retrieval approach based on relative similarity feedback obtained from 2D image layouts. It incorporates online maximal-margin learning to adapt the image similarity metric used to perform retrieval. A user starts a session by browsing a collection of images displayed in a 2D layout. He/she may choose a query image perceived to be similar to the envisioned target image. A set of images similar to the query are then returned. The user can then provide relational feedback and/or update the query image to obtain a new set of images. Algorithms for CBIR are often characterised empirically by simulating usage based on pre-defined, fixed category labels, deeming retrieved results relevant if they share a category label with the query. In contrast, the purpose of the system in this study is to enable browsing and retrieval without predefined categories. Therefore, evaluation is performed in a target-based setting by quantifying the efficiency with which target images are retrieved given initial queries.
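The paper itself does not publish its update rule here, but the idea of online maximal-margin metric learning from relative similarity feedback can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' algorithm: it learns a diagonal weighted squared-Euclidean metric with a passive-aggressive-style hinge update, given triplet feedback of the form "image `a` is more similar to query `q` than image `b` is". All function and parameter names are this sketch's own.

```python
import numpy as np

def triplet_metric_update(w, q, a, b, margin=1.0, lr=0.1):
    """One online max-margin update of a diagonal similarity metric.

    The metric is d_w(x, y) = sum_i w_i * (x_i - y_i)^2, parameterised by
    non-negative per-dimension weights w. Feedback triplet: the user judged
    `a` to be more similar to the query `q` than `b` is, so we want
    d_w(q, b) - d_w(q, a) >= margin.
    """
    da = (q - a) ** 2                 # per-dimension squared differences to a
    db = (q - b) ** 2                 # per-dimension squared differences to b
    loss = margin + w @ da - w @ db   # hinge loss on the margin constraint
    if loss > 0:
        # gradient step: downweight dimensions where a differs from q,
        # upweight dimensions where b differs from q
        w = w - lr * (da - db)
        w = np.maximum(w, 0.0)        # keep weights non-negative (valid metric)
    return w

# Example session: initially uniform weights rank b closer to q than a,
# but repeated feedback triplets adapt the metric until a ranks closer.
q = np.array([0.0, 0.0])
a = np.array([0.0, 2.0])   # differs from q only in dimension 1
b = np.array([1.0, 0.0])   # differs from q only in dimension 0
w = np.ones(2)
for _ in range(10):
    w = triplet_metric_update(w, q, a, b)
```

After a few updates the weight on the dimension separating `q` from `a` shrinks, so retrieval ranked by `d_w` reflects the user's similarity judgments. A full system would apply one such update per feedback event and re-rank the collection under the adapted metric.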
