IEEE Transactions on Image Processing

Visual-Textual Joint Relevance Learning for Tag-Based Social Image Search

Abstract

Due to the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in this research field. However, most existing methods use tags and visual characteristics either separately or sequentially to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user-tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with the use of a set of pseudo-positive images, where the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Comparative results of experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
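The abstract describes the construction only at a high level. As a rough, non-authoritative sketch, the snippet below follows the standard transductive hypergraph relevance propagation that this kind of approach builds on: images are vertices, tags and visual words are hyperedges recorded in an incidence matrix H, pseudo-positive images seed a query vector y, and relevance scores are obtained from the normalized hypergraph Laplacian. The function name, the uniform initial hyperedge weights, and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hypergraph_relevance(H, w, y, alpha=0.9):
    """Sketch of transductive hypergraph learning for relevance estimation.

    H     : (n_images, n_hyperedges) binary incidence matrix; H[v, e] = 1 if
            image v carries tag or visual word e.
    w     : (n_hyperedges,) hyperedge weights (uniform here; the paper's
            method updates them during learning).
    y     : (n_images,) query vector, 1 for pseudo-positive images, 0 otherwise.
    alpha : trade-off between graph smoothness and fitting y.
    Returns a relevance score for every image.
    """
    W = np.diag(w)
    dv = H @ w                         # vertex degrees (weighted hyperedge count per image)
    de = H.sum(axis=0)                 # hyperedge degrees (images per tag/visual word)
    Dv_isqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    # Normalized hypergraph adjacency operator.
    Theta = Dv_isqrt @ H @ W @ De_inv @ H.T @ Dv_isqrt
    n = H.shape[0]
    # Closed-form minimizer of the regularized relevance objective.
    return np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)

# Hypothetical toy usage: 4 images, 3 hyperedges (e.g. two tags, one visual word).
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
w = np.ones(3)                  # uniform weights before any weight learning
y = np.array([1.0, 0, 0, 0])    # one pseudo-positive image
print(hypergraph_relevance(H, w, y))
```

In the paper's setting the weight vector w would itself be re-estimated during learning so that informative tags and visual words gain influence; the sketch above keeps it fixed for brevity.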
