Knowledge-Based Systems

Learning unseen visual prototypes for zero-shot classification



Abstract

The number of object classes is increasing rapidly, making the recognition of new classes difficult. Zero-shot learning aims to predict the labels of samples from new classes by using seen-class samples and their semantic representations. In this paper, we propose a simple method to learn unseen visual prototypes (LUVP) by learning a projection function from the semantic space to the visual feature space, which reduces the hubness problem. We exploit class-level rather than instance-level samples, which alleviates otherwise expensive computational costs. Because the seen and unseen classes are disjoint, directly applying the projection function to unseen samples causes a domain shift problem. We therefore preserve the semantic correlations among unseen labels and then adjust the unseen visual prototypes to minimize the domain shift. We demonstrate through extensive experiments that the proposed method (1) alleviates the hubness problem, (2) overcomes the domain shift problem, and (3) significantly outperforms existing methods for zero-shot classification on five benchmark datasets.
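The core idea in the abstract (regressing from semantic space to visual feature space at the class level, then classifying by nearest predicted prototype) can be sketched as follows. This is a minimal toy illustration, assuming a ridge-regression form for the projection; the paper's actual objective and its correlation-preserving prototype adjustment are not reproduced here, and all variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_sem, d_vis = 10, 20            # assumed semantic and visual dimensionalities
n_seen, n_unseen = 5, 3          # assumed class counts

# Class-level data only: one semantic vector and one visual prototype
# (e.g., the mean feature of its samples) per seen class.
S_seen = rng.normal(size=(n_seen, d_sem))
P_seen = rng.normal(size=(n_seen, d_vis))
S_unseen = rng.normal(size=(n_unseen, d_sem))

# Ridge regression for the projection W: semantic -> visual. Mapping into
# the visual space (rather than the reverse) is what mitigates hubness.
lam = 1.0
W = np.linalg.solve(S_seen.T @ S_seen + lam * np.eye(d_sem), S_seen.T @ P_seen)

# Predicted unseen visual prototypes. (The paper additionally adjusts these
# to counter domain shift; that step is omitted in this sketch.)
P_unseen = S_unseen @ W

def predict(x, prototypes):
    """Return the index of the nearest prototype to feature vector x."""
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

# A test sample lying near unseen class 1's predicted prototype.
x = P_unseen[1] + 0.01 * rng.normal(size=d_vis)
print(predict(x, P_unseen))  # prints: 1
```

Classifying against class-level prototypes instead of all training instances keeps inference cost proportional to the number of classes, matching the abstract's point about avoiding instance-level computation.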


