Journal: Neurocomputing

Zero-shot learning with self-supervision by shuffling semantic embeddings



Abstract

Zero-shot learning and self-supervised learning have been widely studied because they enable efficient representation learning in data-shortage situations. However, few studies consider zero-shot learning with semantic embeddings (e.g., CNN features or attributes) and self-supervision simultaneously. The reason is that most zero-shot learning works employ vector-level semantic embeddings, whereas most self-supervision studies only consider image-level domains, so a novel self-supervision method for vector-level CNN features is needed. We propose a simple way to shuffle semantic embeddings. Furthermore, we propose a method to enrich feature representation and effectively improve zero-shot learning performance. We show that our model outperforms current state-of-the-art methods on the large-scale ImageNet 21K and the small-scale CUB and SUN datasets.

(c) 2021 Elsevier B.V. All rights reserved.
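The core idea named in the abstract, shuffling vector-level semantic embeddings as a self-supervision pretext task, can be illustrated with a minimal sketch: split an embedding vector into equal segments, permute the segments, and keep the permutation as a label the model can be trained to predict. This is a hypothetical illustration of the shuffling idea, not the authors' actual implementation; the function name, segment count, and labeling scheme are assumptions.

```python
import numpy as np

def shuffle_embedding(embedding, num_segments=4, rng=None):
    """Shuffle a vector-level semantic embedding as a pretext task.

    Splits the 1-D embedding into `num_segments` equal segments,
    permutes the segments, and returns the shuffled vector together
    with the permutation used (the self-supervision label).

    Hypothetical sketch only -- not the paper's exact method.
    """
    rng = np.random.default_rng(rng)
    dim = embedding.shape[0]
    assert dim % num_segments == 0, "dimension must divide evenly into segments"
    # View the vector as (num_segments, segment_length) and permute rows.
    segments = embedding.reshape(num_segments, dim // num_segments)
    perm = rng.permutation(num_segments)
    shuffled = segments[perm].reshape(dim)
    return shuffled, perm
```

In a self-supervised setup, the shuffled vector would be fed to a small classifier that predicts `perm` (or an index identifying it), giving the feature extractor an auxiliary training signal without extra annotations.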

