IEEE Transactions on Image Processing

Deep Unbiased Embedding Transfer for Zero-Shot Learning

Abstract

Zero-shot learning aims to recognize objects that do not appear in the training dataset. Previous prevalent mapping-based zero-shot learning methods suffer from the projection domain shift problem because the unseen image classes are absent from the training stage. To alleviate the projection domain shift problem, a deep unbiased embedding transfer (DUET) model is proposed in this paper. The DUET model is composed of a deep embedding transfer (DET) module and an unseen visual feature generation (UVG) module. In the DET module, a novel combined embedding transfer net, which integrates the complementary merits of linear and nonlinear embedding mapping functions, is proposed to connect the visual space and the semantic space. Moreover, an end-to-end joint training process is implemented to train the visual feature extractor and the combined embedding transfer net simultaneously. In the UVG module, a visual feature generator trained within a conditional generative adversarial framework is used to synthesize visual features of the unseen classes, which eases the disturbance caused by the projection domain shift problem. Furthermore, a quantitative index, namely the score of resistance on domain shift (ScoreRDS), is proposed to evaluate different models in terms of their resistance to the projection domain shift problem. Experiments on five zero-shot learning benchmarks verify the effectiveness of the proposed DUET model. As demonstrated by the qualitative and quantitative analyses, the unseen-class visual feature generation, the combined embedding transfer net, and the end-to-end joint training process all contribute to alleviating projection domain shift in zero-shot learning.
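
The listing carries no code, so the following is only a minimal sketch of the "combined embedding transfer net" idea described above: a linear projection and a small nonlinear (MLP) projection from the visual space to the semantic space, summed into one embedding. The class name, layer widths, and dimensions (visual_dim, semantic_dim, hidden_dim) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class CombinedEmbeddingNet(nn.Module):
    """Sketch: map visual features to the semantic space by summing a linear
    projection with a nonlinear (MLP) projection, so the combined mapping
    keeps the complementary merits of both embedding functions."""

    def __init__(self, visual_dim: int = 2048, semantic_dim: int = 312,
                 hidden_dim: int = 1024):
        super().__init__()
        # Linear branch: a direct projection into the semantic space.
        self.linear_branch = nn.Linear(visual_dim, semantic_dim)
        # Nonlinear branch: a two-layer MLP projection.
        self.nonlinear_branch = nn.Sequential(
            nn.Linear(visual_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, semantic_dim),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # The combined embedding can then be scored against each class
        # semantic vector (e.g. by cosine similarity) for classification.
        return self.linear_branch(visual_features) + self.nonlinear_branch(visual_features)
```

Because the abstract states that the visual feature extractor and this net are trained jointly end to end, such a module would sit directly on top of a CNN backbone inside one optimization loop rather than being fit on pre-extracted features.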
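
The UVG module can likewise be pictured as a conditional generator that turns a class semantic vector plus random noise into a synthetic visual feature, so that unseen classes contribute training signal. The sketch below assumes a simple two-layer generator and illustrative dimensions; it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConditionalFeatureGenerator(nn.Module):
    """Sketch: synthesize a visual feature vector from a class semantic
    vector concatenated with noise, so features for unseen classes can be
    generated from their semantic descriptions alone."""

    def __init__(self, semantic_dim: int = 312, noise_dim: int = 128,
                 visual_dim: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(semantic_dim + noise_dim, 4096),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(4096, visual_dim),
            nn.ReLU(inplace=True),  # pooled CNN features are non-negative
        )

    def forward(self, semantic_vec: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([semantic_vec, noise], dim=1))


# Usage (hypothetical attribute vectors): after adversarial training against
# a discriminator conditioned on the same semantic vectors, features for the
# unseen classes are sampled and used to reduce the bias toward seen classes.
unseen_attr = torch.rand(64, 312)
fake_features = ConditionalFeatureGenerator()(unseen_attr, torch.randn(64, 128))
```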
