IEEE/CVF Conference on Computer Vision and Pattern Recognition

Transductive Unbiased Embedding for Zero-Shot Learning



Abstract

Most existing Zero-Shot Learning (ZSL) methods suffer from a strong bias problem: instances of unseen (target) classes tend to be categorized as one of the seen (source) classes, so these methods perform poorly when deployed in the generalized ZSL setting. In this paper, we propose a straightforward yet effective method, Quasi-Fully Supervised Learning (QFSL), to alleviate the bias problem. Our method follows the transductive learning paradigm, which assumes that both labeled source images and unlabeled target images are available for training. In the semantic embedding space, the labeled source images are mapped to several fixed points specified by the source categories, and the unlabeled target images are forced to map to other points specified by the target categories. Experiments on the AwA2, CUB and SUN datasets demonstrate that our method outperforms existing state-of-the-art approaches by a large margin of 9.3%-24.5% under the generalized ZSL setting, and by 0.2%-16.2% under the conventional ZSL setting.
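
The abstract's description of QFSL suggests a simple two-term training objective: a supervised classification loss that maps labeled source images to their class points in the semantic embedding space, plus a bias-suppressing term that pushes unlabeled target images toward the target-class points. The Python (PyTorch) sketch below illustrates that idea under this assumption; the function name, the exact form of the bias term, and the weight lam are illustrative choices, not the paper's formulation.

    import torch
    import torch.nn.functional as F

    def qfsl_style_loss(logits_src, labels_src, logits_tgt, target_class_ids, lam=1.0):
        """Illustrative quasi-fully supervised objective (a sketch, not the
        paper's exact loss): cross-entropy on labeled source images plus a
        bias term that encourages unlabeled target images to place their
        probability mass on the target (unseen) classes."""
        # Supervised term: source images should land on their seen-class points.
        ce = F.cross_entropy(logits_src, labels_src)

        # Bias term: maximize the total probability each unlabeled target image
        # assigns to the target classes (minimize its negative log).
        probs_tgt = F.softmax(logits_tgt, dim=1)
        mass_on_target = probs_tgt[:, target_class_ids].sum(dim=1)
        bias = -torch.log(mass_on_target + 1e-8).mean()

        return ce + lam * bias

    # Usage with random scores: 40 seen classes plus 10 target classes = 50 total.
    src_logits = torch.randn(8, 50)
    src_labels = torch.randint(0, 40, (8,))
    tgt_logits = torch.randn(16, 50)
    loss = qfsl_style_loss(src_logits, src_labels, tgt_logits, torch.arange(40, 50))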
