IEEE Transactions on Neural Networks and Learning Systems

Guide Subspace Learning for Unsupervised Domain Adaptation



Abstract

A prevailing problem in many machine learning tasks is that the training data (i.e., source domain) and test data (i.e., target domain) have different distributions, i.e., they are not independent and identically distributed (i.i.d.). Unsupervised domain adaptation (UDA) was proposed to learn from unlabeled target data by leveraging labeled source data. In this article, we propose a guide subspace learning (GSL) method for UDA, in which an invariant, discriminative, and domain-agnostic subspace is learned via three guidance terms through a two-stage progressive training strategy. First, the subspace-guided term reduces the discrepancy between the domains by moving the source closer to the target subspace. Second, the data-guided term uses coupled projections to map both domains into a unified subspace, where each target sample can be represented by the source samples with a low-rank coefficient matrix that preserves the global structure of the data. In this way, the data from both domains are well interlaced and domain-invariant features can be obtained. Third, to improve the discrimination of the subspaces, the label-guided term is constructed for prediction based on source labels and pseudo-target labels. To further improve the model's tolerance to label noise, a label relaxation matrix is introduced. For the solver, a two-stage learning strategy with a teacher-teaching and student-feedback mode is proposed to obtain the discriminative, domain-agnostic subspace. In addition, to handle nonlinear domain shift, a nonlinear GSL (NGSL) framework is formulated with kernel embedding, such that nonlinearity is imposed on the unified subspace. Experiments on various cross-domain visual benchmark databases show that our methods outperform many state-of-the-art UDA methods. The source code is available at https://github.com/Fjr9516/GSL.
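To make the subspace-guided idea concrete, the following is a minimal sketch of the classical subspace-alignment technique that the abstract's first guidance term builds on: compute PCA bases for each domain and learn a linear map that moves the source basis toward the target subspace. This is an illustrative simplification, not the authors' full GSL objective; all function names and the synthetic data are assumptions for the example.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns) of the centered data X (n x p)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T  # shape (p, d)

def align_source_to_target(Xs, Xt, d=2):
    """Subspace alignment: learn M = S^T T so the transformed source
    basis S @ M approximates the target basis T, reducing the
    discrepancy between the two domain subspaces."""
    S = pca_basis(Xs, d)
    T = pca_basis(Xt, d)
    M = S.T @ T  # optimal alignment matrix in the least-squares sense
    Zs = (Xs - Xs.mean(axis=0)) @ S @ M  # source, mapped into aligned subspace
    Zt = (Xt - Xt.mean(axis=0)) @ T      # target, in its own subspace
    return Zs, Zt

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))         # source domain samples
Xt = rng.normal(size=(120, 5)) + 1.0   # target domain with a distribution shift
Zs, Zt = align_source_to_target(Xs, Xt, d=2)
print(Zs.shape, Zt.shape)  # (100, 2) (120, 2)
```

A downstream classifier trained on `Zs` with the source labels can then be applied to `Zt`; GSL extends this idea with coupled projections, a low-rank representation constraint, and label guidance, as described above.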
