IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts



Abstract

Most existing zero-shot learning methods treat the problem as one of visual-semantic embedding. Given the demonstrated capability of Generative Adversarial Networks (GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions, and hence recognize novel classes without having seen any examples. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions of an unseen class (e.g., Wikipedia articles) and generates synthesized visual features for that class. With the added pseudo data, zero-shot learning is naturally converted into a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as explicit supervision. Unlike previous methods that use complex engineered regularizers, our approach suppresses the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks for text-based zero-shot learning.
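The core mechanism described above can be sketched in a few lines of numpy. This is a hedged illustration of the idea, not the authors' implementation: a stand-in generator (here a single random linear map, where the paper would use a trained GAN generator) synthesizes visual features from a noisy text embedding plus noise, and a "visual pivot" regularizer measures how far the mean of the synthesized features for a class lies from that class's pivot (assumed here to be the mean real visual feature of the class). All dimensions and names (`TEXT_DIM`, `generate`, `visual_pivot_loss`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TEXT_DIM, NOISE_DIM, FEAT_DIM = 300, 100, 512  # assumed sizes

# Stand-in for a trained generator G: one linear layer + ReLU.
# (The paper's G is adversarially trained; this only shows the data flow.)
W = rng.standard_normal((TEXT_DIM + NOISE_DIM, FEAT_DIM)) * 0.01

def generate(text_emb, n_samples):
    """Synthesize n_samples visual features for one unseen class,
    conditioned on its (noisy) text embedding."""
    z = rng.standard_normal((n_samples, NOISE_DIM))
    t = np.tile(text_emb, (n_samples, 1))
    return np.maximum(np.concatenate([t, z], axis=1) @ W, 0.0)

def visual_pivot_loss(fake_feats, pivot):
    """Squared L2 distance between the mean synthesized feature and the
    class's visual pivot (e.g., the mean of real features for that class)."""
    return float(((fake_feats.mean(axis=0) - pivot) ** 2).sum())

# Usage: synthesize pseudo data for an unseen class; a standard classifier
# can then be trained on these features, turning ZSL into classification.
text_emb = rng.standard_normal(TEXT_DIM)   # e.g., embedding of a Wikipedia article
fake = generate(text_emb, n_samples=32)
pivot = np.abs(rng.standard_normal(FEAT_DIM))  # stand-in for a real class mean
print(fake.shape, visual_pivot_loss(fake, pivot))
```

The pivot loss acts only on the per-class mean, so it constrains where each class's synthesized cloud sits in feature space without collapsing the within-class diversity that the noise vector `z` provides.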

