ACM transactions on intelligent systems

Supervised Representation Learning with Double Encoding-Layer Autoencoder for Transfer Learning

Abstract

Transfer learning has gained a great deal of attention and interest in the past decade. One crucial research issue in transfer learning is how to find a good representation for instances of different domains such that the divergence between domains can be reduced under the new representation. Recently, deep learning has been proposed to learn more robust or higher-level features for transfer learning. In this article, we adapt the autoencoder technique to transfer learning and propose a supervised representation learning method based on a double encoding-layer autoencoder. The proposed framework consists of two encoding layers: one for embedding and the other for label encoding. In the embedding layer, the distribution distance between the embedded instances of the source and target domains is minimized in terms of KL-divergence. In the label encoding layer, the label information of the source domain is encoded using a softmax regression model. Moreover, to empirically explore why the proposed framework works well for transfer learning, we propose a new, effective autoencoder-based measure to compute the distribution distance between different domains. Experimental results show that the proposed measure better reflects the degree of transfer difficulty and correlates more strongly with the performance of supervised learning algorithms (e.g., Logistic Regression) than previous measures such as KL-divergence and Maximum Mean Discrepancy. Therefore, in our model, we incorporate two distribution distance measures to minimize the difference between the source and target domains in the embedding representations. Extensive experiments conducted on three real-world image datasets and one text dataset demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods.
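
As a rough illustration of the training objective sketched in the abstract, the following Python (PyTorch) snippet combines a reconstruction loss, a KL-divergence term between the embedded source and target instances, and a softmax-regression loss on source labels. It is a minimal sketch based only on the abstract: the names (DoubleEncodingAutoencoder, kl_between_domains, loss_step), the sigmoid activations, the loss weights, and the use of normalized mean activations to approximate the two domain distributions are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DoubleEncodingAutoencoder(nn.Module):
    """Hypothetical sketch: first encoding layer embeds inputs,
    second encoding layer maps embeddings to class logits (softmax regression)."""

    def __init__(self, n_features: int, n_hidden: int, n_classes: int):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        self.label_encoder = nn.Linear(n_hidden, n_classes)   # label encoding layer
        self.decode = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

    def forward(self, x):
        z = self.embed(x)                     # embedding layer
        return z, self.label_encoder(z), self.decode(z)


def kl_between_domains(z_src, z_tgt, eps=1e-8):
    # Assumption: approximate each domain's embedded distribution by the
    # normalized mean activation of a batch, then take a symmetric KL divergence.
    p = z_src.mean(dim=0)
    p = p / (p.sum() + eps)
    q = z_tgt.mean(dim=0)
    q = q / (q.sum() + eps)
    return (p * (p / (q + eps)).log()).sum() + (q * (q / (p + eps)).log()).sum()


def loss_step(model, x_src, y_src, x_tgt, alpha=1.0, beta=1.0):
    z_s, logits_s, rec_s = model(x_src)
    z_t, _, rec_t = model(x_tgt)
    reconstruction = F.mse_loss(rec_s, x_src) + F.mse_loss(rec_t, x_tgt)
    domain_kl = kl_between_domains(z_s, z_t)          # align source/target embeddings
    label_loss = F.cross_entropy(logits_s, y_src)     # source labels only
    return reconstruction + alpha * domain_kl + beta * label_loss

Note that, as described in the abstract, label supervision enters only through the source-domain term, while the KL term pulls the unlabeled target embeddings toward the source embedding distribution.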
