Journal of Visual Communication & Image Representation

Trademark image retrieval via transformation-invariant deep hashing


Abstract

Trademark images are widely used to distinguish goods because of their uniqueness, and their number has grown so large that searching them accurately and quickly is difficult. Most existing methods rely on conventional dense features to retrieve visually similar images; however, neither their retrieval performance nor their search time is satisfactory. In this paper, we propose a unified deep hashing framework that learns binary codes for trademark images, achieving good retrieval performance with reduced search time. The framework integrates two types of deep convolutional networks (i.e., a spatial transformer network and a recurrent convolutional network) to obtain transformation-invariant features. These features are discriminative in describing trademark images and robust to different types of transformations. The two-stream networks are followed by a hashing layer. Network parameters are learned by minimizing a sample-weighted loss, which leverages hard-to-retrieve images. We conduct experiments on two benchmark image sets, NPU-TM and METU, and verify the effectiveness and efficiency of the proposed approach over the state of the art. (C) 2019 Elsevier Inc. All rights reserved.
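To illustrate the retrieval pipeline the abstract describes (deep features, a hashing layer producing binary codes, and Hamming-distance ranking), here is a minimal NumPy sketch. The linear projection `W`, the feature matrix, and the code length are hypothetical stand-ins; the paper's actual two-stream network (spatial transformer plus recurrent convolutional network) and its sample-weighted training loss are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the learned transformation-invariant features:
# each row is the feature vector the two-stream network would produce
# for one trademark image.
features = rng.normal(size=(5, 16))

# Illustrative hashing layer: a linear projection followed by tanh,
# then sign-binarization into K-bit codes in {-1, +1}, as is typical
# in deep hashing methods.
K = 8                                  # code length in bits (assumed)
W = rng.normal(size=(16, K))           # hypothetical learned projection
codes = np.sign(np.tanh(features @ W))

def hamming(a, b):
    """Hamming distance between two {-1, +1} binary codes."""
    return int(np.sum(a != b))

# Retrieval: rank database images by Hamming distance to the query code.
# Comparing short binary codes is what makes search fast at scale.
query = codes[0]
dists = [hamming(query, c) for c in codes]
ranking = np.argsort(dists)            # index 0 (the query itself) ranks first
```

The design point this sketch captures is that once images are encoded as short binary strings, nearest-neighbor search reduces to cheap bitwise comparisons, which is why hashing yields shorter search times than matching dense real-valued features.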

