Neurocomputing

Self-constraining and attention-based hashing network for bit-scalable cross-modal retrieval

Abstract

Deep cross-modal hashing (CMH) has recently received increased attention in multimedia information retrieval, as it combines the low storage cost and search efficiency of hashing with the strong feature-abstraction capabilities of deep neural networks. CMH effectively integrates hash representation learning and hash function optimization into an end-to-end framework. However, most existing deep cross-modal hashing methods use a one-size-fits-all high-level representation, which loses the spatial information of the data. Moreover, previous methods mostly generate fixed-length hash codes in which the significance levels of the different bits are equally weighted, restricting their practical flexibility. To address these issues, we propose a self-constraining and attention-based hashing network (SCAHN) for bit-scalable cross-modal hashing. SCAHN integrates the label constraints from early and late stages, as well as their fused features, into the hash representation and hash function learning. Furthermore, because the fusion of early- and late-stage features is based on an attention mechanism, each bit of the hash codes can be unequally weighted, so code lengths can be manipulated by ranking the significance of each bit without extra hash-model training. Extensive experiments on four benchmark datasets demonstrate that our proposed SCAHN outperforms current state-of-the-art CMH methods. It is also shown that the generated bit-scalable hash codes preserve their discriminative power well across varying code lengths and obtain results competitive with the state-of-the-art. (C) 2020 Elsevier B.V. All rights reserved.
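The bit-scalable mechanism described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it assumes, hypothetically, that a trained model has already produced full-length binary codes together with a per-bit attention weight (both replaced here by random stand-ins), and shows how shorter codes follow from keeping only the top-ranked bits, with no retraining.

```python
import numpy as np

# Stand-ins for a trained model's outputs: binary codes and learned
# per-bit attention weights. Both are random here, purely for illustration.
rng = np.random.default_rng(0)
full_length = 64                        # full hash code length
n_items = 1000
codes = rng.integers(0, 2, size=(n_items, full_length)).astype(np.int8)
bit_weights = rng.random(full_length)   # hypothetical attention weights

def truncate_codes(codes, bit_weights, target_length):
    """Keep the target_length most significant bits, ranked by attention
    weight. Shorter codes are a subset of the full code, so no extra
    hash-model training is needed for a new code length."""
    order = np.argsort(bit_weights)[::-1]   # bits sorted by significance
    return codes[:, order[:target_length]]

def hamming_distances(query, database):
    """Hamming distance between one query code and every database code."""
    return np.count_nonzero(query[None, :] != database, axis=1)

# Retrieve with 16-bit codes derived from the same 64-bit model output.
short_codes = truncate_codes(codes, bit_weights, target_length=16)
ranking = np.argsort(hamming_distances(short_codes[0], short_codes))
print(ranking[:5])   # indices of the 5 nearest items under 16-bit codes
```

Under this reading, the attention weights induce a single importance ranking over the bits, so one trained model serves every code length up to the full length; the trade-off between accuracy and storage is then chosen at query time rather than at training time.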