Object-oriented convolutional features for fine-grained image retrieval in large surveillance datasets
Journal: Future Generation Computer Systems
Abstract

Large-scale visual surveillance generates huge volumes of data at a rapid pace, giving rise to massive image repositories. Efficient and reliable access to relevant data in these ever-growing databases is a highly challenging task due to the complex nature of surveillance objects. Furthermore, inter-class visual similarity between vehicles requires the extraction of fine-grained and highly discriminative features. In recent years, features from deep convolutional neural networks (CNNs) have exhibited state-of-the-art performance in image retrieval. However, these features have been used without regard to their sensitivity to objects of a particular class. In this paper, we propose an object-oriented feature selection mechanism for deep convolutional features from a pre-trained CNN. Convolutional feature maps from a deep layer are selected based on an analysis of their responses to surveillance objects. The selected features represent the semantic features of surveillance objects and their parts with minimal influence from the background, effectively eliminating the need for a background removal procedure prior to feature extraction. Layer-wise mean activations from the selected feature maps form the discriminative descriptor for each object. These object-oriented convolutional features (OOCF) are then projected onto a low-dimensional Hamming space using locality-sensitive hashing approaches. The resulting compact binary hash codes allow efficient retrieval within large-scale datasets. Results on five challenging datasets reveal that OOCF achieves better precision and recall than the full feature set for objects with varying backgrounds.

Highlights

  • We propose representing vehicle images with appropriate convolutional features.
  • Our method reduces the number of feature maps without performance degradation.
  • The selected features yield better retrieval performance than the full feature set.
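The pipeline the abstract outlines — score feature maps by their response to surveillance objects, keep the most object-sensitive maps, pool each selected map into a mean-activation descriptor, and project the descriptor into Hamming space with locality-sensitive hashing — can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the channel-scoring rule (mean object response minus mean background response), the number of retained maps `k`, and the random-hyperplane LSH variant are all assumptions made for the sketch.

```python
import numpy as np

def select_object_feature_maps(fmaps_object, fmaps_background, k=128):
    """Pick the k channels of a deep conv layer that respond most strongly
    to object crops relative to background crops.
    fmaps_*: activations of shape (N, C, H, W)."""
    obj_response = fmaps_object.mean(axis=(0, 2, 3))   # (C,) mean activation per channel
    bg_response = fmaps_background.mean(axis=(0, 2, 3))
    scores = obj_response - bg_response                 # illustrative selection criterion
    return np.argsort(scores)[::-1][:k]                 # indices of top-k channels

def oocf_descriptor(fmaps, selected):
    """Layer-wise mean activation over each selected map for one image.
    fmaps: (C, H, W) -> descriptor of shape (k,)."""
    return fmaps[selected].mean(axis=(1, 2))

class LSHHasher:
    """Random-hyperplane LSH: one sign bit per hyperplane."""
    def __init__(self, dim, n_bits=64, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))

    def hash(self, x):
        return (self.planes @ x > 0).astype(np.uint8)   # compact binary code

def hamming(a, b):
    """Hamming distance between two binary codes; drives the final retrieval."""
    return int(np.count_nonzero(a != b))
```

At query time, each database image is reduced to a short binary code once, and retrieval ranks candidates by `hamming(query_code, db_code)`, which is far cheaper than comparing full convolutional feature sets.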
