IEEE Transactions on Neural Networks and Learning Systems

Discriminative Transfer Feature and Label Consistency for Cross-Domain Image Classification



Abstract

Visual domain adaptation aims to learn an effective transferable model for unlabeled target images by leveraging well-labeled source images that follow a different distribution. Many recent efforts focus on extracting domain-invariant image representations by exploiting target pseudo labels, predicted by the source classifier, to further mitigate the conditional distribution shift across domains. However, two essential factors are overlooked by most existing methods: 1) the learned transferable features should be not only domain invariant but also category discriminative; and 2) the target pseudo labels are a double-edged sword for cross-domain alignment; that is, wrongly predicted target labels may hinder class-wise domain matching. In this article, to address these two issues simultaneously, we propose a discriminative transfer feature and label consistency (DTLC) approach for visual domain adaptation, which naturally unifies cross-domain alignment with discriminative information preservation and label consistency of source and target data in a single framework. Specifically, DTLC first incorporates class-discriminative information into the distribution alignment of both domains by penalizing, for each sample, the maximum distance to data points of the same class and the minimum distance to data points with different labels. The target pseudo labels are then refined based on label consistency within the domains. Thus, transfer feature learning and coarse-to-fine target labeling are coupled to benefit each other iteratively. Comprehensive experiments on several visual cross-domain benchmarks verify that DTLC gains remarkable margins over state-of-the-art (SOTA) nondeep visual domain adaptation methods and is even comparable to competitive deep domain adaptation ones.
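The two ingredients described above can be illustrated with a minimal, hypothetical sketch (not the paper's actual formulation): a pair-based discriminative penalty that, for each sample, contrasts the farthest same-class neighbor with the nearest different-class neighbor, and a coarse-to-fine pseudo-label refinement that reassigns target samples to their nearest class centroid. All function names and the centroid-based refinement rule are illustrative assumptions.

```python
import numpy as np

def discriminative_penalty(X, y):
    # Illustrative class-discriminative term: for each sample, penalize the
    # maximum distance to a same-class point minus the minimum distance to a
    # different-class point. Lower values mean tighter classes that are
    # farther apart (more discriminative features).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    total, count = 0.0, 0
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False            # exclude the sample itself
        diff = (y != y[i])
        if same.any() and diff.any():
            total += D[i, same].max() - D[i, diff].min()
            count += 1
    return total / max(count, 1)

def refine_pseudo_labels(Xt, pseudo, n_iter=5):
    # Coarse-to-fine refinement (assumed scheme): repeatedly recompute class
    # centroids from the current labels and reassign each target sample to
    # its nearest centroid, so isolated wrong labels get corrected.
    labels = pseudo.copy()
    for _ in range(n_iter):
        classes = np.unique(labels)
        centroids = np.stack([Xt[labels == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(Xt[:, None, :] - centroids[None, :, :], axis=2)
        labels = classes[d.argmin(axis=1)]
    return labels
```

In the full method these two steps would alternate with the cross-domain distribution alignment itself, so that better features yield cleaner pseudo labels and vice versa.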


