Image and Vision Computing

A novel unsupervised Globality-Locality Preserving Projections in transfer learning



Abstract

In this paper, a novel unsupervised dimensionality reduction algorithm, Unsupervised Globality-Locality Preserving Projections in Transfer Learning (UGLPTL), is proposed. It builds on the conventional Globality-Locality Preserving Projections (GLPP) dimensionality reduction algorithm, which does not perform well in real-world transfer learning (TL) applications. In TL applications, one application (the source domain) contains sufficient labeled data, while the related application (the target domain) contains only unlabeled data. Compared to existing TL methods, the proposed method incorporates all the objectives essential for transfer learning: minimizing the marginal and conditional distribution discrepancies between the two domains, maximizing the variance of the target domain, and performing geometrical diffusion on manifolds. UGLPTL seeks a projection that maps the source- and target-domain data into a common subspace in which both the labeled source data and the unlabeled target data can be used to perform dimensionality reduction. Comprehensive experiments verify that the proposed method outperforms many state-of-the-art non-transfer-learning and transfer-learning methods on two popular real-world cross-domain visual transfer learning data sets. The proposed UGLPTL approach achieves mean accuracies of 82.18% and 87.14% over all tasks of the PIE Face and Office-Caltech data sets, respectively. (C) 2019 Elsevier B.V. All rights reserved.
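The abstract names the objectives but not the exact UGLPTL formulation, so the sketch below is only a rough illustration of the general idea: a standard TCA/JDA-style transfer subspace learner that minimizes the marginal distribution discrepancy (MMD) between source and target while preserving the variance of the projected data, solved as a generalized eigenvalue problem. The function name `transfer_subspace`, the parameters `k` and `lam`, and the ridge term are illustrative assumptions, not the paper's notation; the conditional-distribution and manifold (geometrical diffusion) terms described in the abstract are omitted here.

```python
import numpy as np
from scipy.linalg import eigh

def transfer_subspace(Xs, Xt, k=20, lam=1.0):
    """Illustrative TCA/JDA-style sketch (not the exact UGLPTL method):
    find a projection W that shrinks the marginal MMD between the two
    domains while keeping the variance of the projected data.

    Xs: (ns, d) labeled source features; Xt: (nt, d) unlabeled target features.
    Returns W (d, k) and the projected source/target data.
    """
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])                      # (n, d) stacked data
    n, d = X.shape

    # MMD coefficient matrix: M = e e^T with e = [1/ns ... -1/nt ...]
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    M = np.outer(e, e)                           # (n, n)

    # Centering matrix H measures the overall scatter of the projected data
    H = np.eye(n) - np.ones((n, n)) / n

    A = X.T @ M @ X + lam * np.eye(d)            # domain mismatch + regularizer
    B = X.T @ H @ X + 1e-6 * np.eye(d)           # variance term (ridge keeps B positive definite)

    # Smallest generalized eigenvalues of A w = lambda * B w give directions
    # that reduce the domain gap while preserving variance.
    vals, vecs = eigh(A, B)
    W = vecs[:, :k]
    return W, Xs @ W, Xt @ W

# Usage on random stand-in features (assumed shapes, for illustration only)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(100, 64))
    Xt = rng.normal(loc=0.5, size=(80, 64))
    W, Zs, Zt = transfer_subspace(Xs, Xt, k=10)
    print(Zs.shape, Zt.shape)                    # (100, 10) (80, 10)
```

In methods of this family, a classifier trained on the projected labeled source data `Zs` is then applied to the projected unlabeled target data `Zt`; the conditional-distribution term would additionally use pseudo-labels on the target domain.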
