Neurocomputing

Zero-shot learning with regularized cross-modality ranking


Abstract

Zero-shot learning aims to predict samples from novel classes that have no labeled instances in the training stage. This is typically achieved by exploiting intermediate side information to transfer knowledge from seen classes to unseen testing classes. Approaches differ in how they use the side information and in their embedding methods. However, most methods consider only the relationships among different modalities and neglect to preserve consistency among samples within the same modality. In this paper, we propose an approach called Regularized Cross-Modality Ranking (ReCMR) that captures semantic information from heterogeneous sources by taking both intra-modal and inter-modal semantics into consideration. Specifically, we employ the hinge ranking loss to exploit the structure among different modalities and devise efficient regularizers to constrain the variation of samples within the same modality. Experimental results on the popular AwA and CUB datasets show that ReCMR significantly outperforms state-of-the-art methods. (C) 2017 Elsevier B.V. All rights reserved.
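The abstract does not give the exact objective, but a minimal sketch of the two ingredients it names, a cross-modal hinge ranking loss and an intra-modal consistency regularizer, might look as follows. The linear projection W, the margin value, and the trade-off weight lam are illustrative assumptions for this sketch, not the formulation from the paper.

```python
import numpy as np

def hinge_ranking_loss(X, S, labels, W, margin=0.1):
    """Cross-modal hinge ranking loss (illustrative sketch).

    X      : (n, d)  image features
    S      : (c, k)  class semantic vectors (e.g. attributes)
    labels : (n,)    class index of each sample
    W      : (d, k)  assumed linear projection from image to semantic space
    """
    P = X @ W                        # project images into the semantic space
    scores = P @ S.T                 # (n, c) compatibility with every class
    correct = scores[np.arange(len(labels)), labels][:, None]
    # hinge: penalise wrong classes ranked within `margin` of the true class
    viol = np.maximum(0.0, margin - correct + scores)
    viol[np.arange(len(labels)), labels] = 0.0   # ignore the true class itself
    return viol.sum(axis=1).mean()

def intra_modal_regularizer(X, labels, W):
    """Penalise variation among same-class samples within the image modality."""
    P = X @ W
    reg = 0.0
    for c in np.unique(labels):
        Pc = P[labels == c]
        reg += ((Pc - Pc.mean(axis=0)) ** 2).sum()
    return reg / len(X)

# toy usage: 20 samples, 5 seen classes, 64-d features, 16-d attributes
rng = np.random.default_rng(0)
X, S = rng.normal(size=(20, 64)), rng.normal(size=(5, 16))
labels = rng.integers(0, 5, size=20)
W = rng.normal(size=(64, 16)) * 0.01
lam = 0.5   # hypothetical trade-off weight between the two terms
total = hinge_ranking_loss(X, S, labels, W) + lam * intra_modal_regularizer(X, labels, W)
print(total)
```

In this reading, the ranking term ties the image modality to the semantic modality, while the regularizer keeps same-class samples close after projection; how ReCMR actually balances and optimizes the two terms is specified in the full paper.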
