Conference on Information and Knowledge Technology

Common feature extraction in multi-source domains for transfer learning

Abstract

In transfer learning scenarios, finding a common feature representation is crucial for tackling the problem of domain shift, where the training (source domain) and test (target domain) sets differ in their distributions. However, classical dimensionality reduction approaches such as Fisher Discriminant Analysis (FDA) do not perform well when faced with such a shift. In this paper we introduce CoMuT, a method for Common feature extraction in Multi-source domains for Transfer learning, which finds a common feature representation between the different source domains and the target domain. CoMuT projects the data into a latent space that reduces the drift in distributions across domains while preserving the separability between classes. CoMuT constructs the latent space in a semi-supervised manner to bridge the domains and relate them to each other. The projected domains have similar distributions, so classical machine learning methods can be applied to them to classify the target data. Empirical results indicate that CoMuT outperforms other dimensionality reduction methods on several artificial and real datasets.
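
The abstract stops at a high-level description, so the sketch below only illustrates the general recipe it outlines (align source and target distributions in a shared latent space while keeping the labelled source classes separable), not the actual CoMuT algorithm. The function name domain_aligned_projection, the MMD-style alignment term, and the FDA-style scatter matrices are assumptions chosen for illustration; a minimal Python sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import eigh

def domain_aligned_projection(Xs_list, ys_list, Xt, dim=2, mu=1.0):
    """Illustrative stand-in, NOT the paper's CoMuT method.

    Learns a linear map W (d x dim) so that projected source and target
    samples have similar distributions (MMD-style term) while the labelled
    source classes remain separable (FDA-style scatter matrices).
    """
    d = Xt.shape[1]
    Xs_all = np.vstack(Xs_list)                      # all labelled source samples
    ys_all = np.concatenate(ys_list)
    X = np.vstack([Xs_all, Xt])                      # sources first, target last
    n, n_t = X.shape[0], Xt.shape[0]

    # MMD-style coefficient matrix: one source-vs-target block per source domain
    M = np.zeros((n, n))
    offset = 0
    for Xs in Xs_list:
        n_s = Xs.shape[0]
        e = np.zeros((n, 1))
        e[offset:offset + n_s] = 1.0 / n_s           # +1/n_s on this source
        e[-n_t:] = -1.0 / n_t                        # -1/n_t on the target
        M += e @ e.T
        offset += n_s

    # FDA-style between/within-class scatter from the labelled sources
    mean_all = Xs_all.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(ys_all):
        Xc = Xs_all[ys_all == c]
        mc = Xc.mean(axis=0)
        Sb += Xc.shape[0] * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Maximise class separation while penalising domain discrepancy:
    # generalised eigenproblem  Sb w = lambda (X' M X + Sw + mu I) w
    B = X.T @ M @ X + Sw + mu * np.eye(d)            # mu*I keeps B positive definite
    vals, vecs = eigh(Sb, B)                         # eigenvalues in ascending order
    W = vecs[:, np.argsort(vals)[::-1][:dim]]        # keep the top `dim` directions
    return W
```

Once such a projection W is learned, each domain is mapped with X @ W and any classical classifier trained on the projected source samples can label the projected target samples, which mirrors the workflow the abstract describes for CoMuT.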
