In transfer learning scenarios, finding a common feature representation is crucial for tackling domain shift, where the training (source domain) and test (target domain) sets differ in their distributions. However, classical dimensionality reduction approaches such as Fisher Discriminant Analysis (FDA) perform poorly under such shift. In this paper we introduce CoMuT, a method for Common feature extraction in Multi-source domains for Transfer learning, which finds a common feature representation across different source domains and the target domain. CoMuT projects the data into a latent space that reduces the drift in distributions across domains while preserving the separability between classes. The latent space is constructed in a semi-supervised manner to bridge the domains and relate them to one another. The projected domains have similar distributions, so classical machine learning methods can be applied to them to classify the target data. Empirical results indicate that CoMuT outperforms other dimensionality reduction methods on several artificial and real datasets.
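To make the overall pipeline concrete, the following is a minimal, illustrative sketch of the generic "shared latent space, then classify" idea described above; it is not the CoMuT algorithm itself. It assumes scikit-learn, a simple pooled-PCA projection in place of CoMuT's learned latent space, and hypothetical arrays `Xs`, `ys` (labeled source) and `Xt` (unlabeled, shifted target).

```python
# Illustrative sketch only: a generic "shared subspace, then classify" pipeline,
# NOT the CoMuT algorithm. Arrays Xs, ys, Xt are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 20))   # source-domain features
ys = (Xs[:, 0] > 0).astype(int)             # source-domain labels
Xt = rng.normal(0.5, 1.2, size=(80, 20))    # target features with distribution shift

# 1) Learn a single projection from the pooled domains so that both are
#    mapped into the same low-dimensional latent space.
proj = PCA(n_components=5).fit(np.vstack([Xs, Xt]))
Zs, Zt = proj.transform(Xs), proj.transform(Xt)

# 2) Once the domains share a representation, an ordinary classifier trained
#    on the projected source data can be applied to the projected target data.
clf = KNeighborsClassifier(n_neighbors=3).fit(Zs, ys)
target_predictions = clf.predict(Zt)
```

CoMuT differs from this sketch in that its projection is learned semi-supervisedly to both align the domain distributions and keep classes separable, rather than simply maximizing pooled variance.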