IEEE Winter Conference on Applications of Computer Vision

Spatial-Content Image Search in Complex Scenes



Abstract

Although image search has been studied extensively over the last two decades, most works have focused on either instance-level or semantic-level retrieval. In this work, we develop a novel spatial-semantic retrieval method, namely spatial-content image search, which retrieves images that not only share the same spatial semantics as the query image but are also visually consistent with it in complex scenes. We achieve this goal by capturing the spatial-semantic concepts contained in an image together with a visual representation of each concept. Specifically, we first generate a set of bounding boxes and their category labels with YOLOv3 to represent spatial-semantic constraints, and then obtain the visual content of each bounding box from deep features extracted with a convolutional neural network. We then customize a similarity computation method that evaluates the relevance between dataset images and input queries according to these image representations. Experimental results on two large-scale benchmark retrieval datasets whose images contain multiple objects demonstrate that our method provides an effective way to query image databases. Our code is available at https://github.com/MaJinWakeUp/spatial-content.
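
The abstract outlines a three-step pipeline: detect objects with their category labels (spatial-semantic constraints), extract a deep feature per detected box (visual content), and rank database images with a customized similarity measure. The sketch below is only a plausible instantiation of that pipeline, not the authors' implementation (see the linked repository for that): a torchvision Faster R-CNN stands in for YOLOv3 since only boxes and labels are needed, a ResNet-50 backbone supplies per-box features, and the similarity function, the weight alpha, and the helper names represent/similarity are all hypothetical.

# Illustrative sketch only: a torchvision Faster R-CNN stands in for the paper's
# YOLOv3 detector, and the similarity score is a hypothetical blend of
# class-matched spatial overlap and deep-feature cosine similarity.
import torch
import torchvision
from torchvision import transforms
from torchvision.ops import box_iou
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Detector producing bounding boxes and category labels (spatial-semantic constraints).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(device)

# CNN backbone for per-box visual features (2048-d pooled ResNet-50 descriptors).
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()
backbone = backbone.eval().to(device)

crop_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def represent(image: Image.Image, score_thr: float = 0.5):
    """Return (image-normalized boxes, labels, L2-normalized features) for one image."""
    x = transforms.functional.to_tensor(image).to(device)
    det = detector([x])[0]
    keep = det["scores"] > score_thr
    boxes, labels = det["boxes"][keep], det["labels"][keep]
    feats = []
    for b in boxes:
        x1, y1, x2, y2 = [int(v) for v in b.tolist()]
        crop = image.crop((x1, y1, max(x2, x1 + 1), max(y2, y1 + 1)))
        f = backbone(crop_tf(crop).unsqueeze(0).to(device)).squeeze(0)
        feats.append(torch.nn.functional.normalize(f, dim=0))
    feats = torch.stack(feats) if feats else torch.empty(0, 2048, device=device)
    w, h = image.size
    scale = torch.tensor([w, h, w, h], dtype=torch.float32, device=device)
    return boxes / scale, labels, feats

@torch.no_grad()
def similarity(query, candidate, alpha: float = 0.5):
    """Score a candidate against a query: for every query object, find the best
    candidate object of the same class and blend spatial overlap (IoU of
    image-normalized boxes) with visual cosine similarity."""
    qb, ql, qf = query
    cb, cl, cf = candidate
    if len(qb) == 0 or len(cb) == 0:
        return 0.0
    score = 0.0
    for i in range(len(qb)):
        same = (cl == ql[i]).nonzero(as_tuple=True)[0]       # semantic constraint
        if len(same) == 0:
            continue
        spatial = box_iou(qb[i:i + 1], cb[same]).squeeze(0)  # spatial constraint
        visual = cf[same] @ qf[i]                            # visual consistency
        score += (alpha * spatial + (1 - alpha) * visual).max().item()
    return score / len(qb)

# Usage: rank a database by similarity to a query image.
# q = represent(Image.open("query.jpg").convert("RGB"))
# ranked = sorted(db_paths, key=lambda p: -similarity(q, represent(Image.open(p).convert("RGB"))))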


