Computational Intelligence and Neuroscience

Learning Domain-Independent Deep Representations by Mutual Information Minimization


Abstract

Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source domain data can help the classification of the target domain. Conventional transfer representation learning constrains the distributions of the source and target domain representations to be similar, which relies heavily on the characterization of the domain distributions and on the distribution-matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motivation is to make the learned representations of data points independent of the domains to which they belong: from an optimal cross-domain representation of a data point, it is difficult to tell which domain the point comes from. In this way, the learned representations generalize across domains. To measure the dependency between the representations and the domains of the data points, we propose to use the mutual information between the representations and the domain-membership indicators. By minimizing this mutual information, we learn representations that are independent of domains. We build a classwise deep convolutional network as the representation model and maximize the margin of each data point of the corresponding class, where the margin is defined over intraclass and interclass neighborhoods. To learn the model parameters, we construct a unified minimization problem in which the margins are maximized while the representation-domain mutual information is minimized. In this way, we learn representations that are not only discriminative but also independent of domains. An iterative algorithm based on the Adam optimization method is proposed to solve the minimization problem, learning the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments on benchmark datasets show the method's effectiveness and its advantages over existing domain transfer learning methods.
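The abstract names the objective's ingredients without giving the implementation, so the following PyTorch sketch illustrates one plausible reading of it: a small convolutional representation model trained with Adam to maximize per-point margins (defined over intraclass and interclass neighbors within a batch) while minimizing an estimate of the mutual information I(Z; D) between representations and domain indicators. The MI estimate used here is the standard Barber-Agakov variational lower bound tightened by an auxiliary domain classifier; all names (FeatureNet, mi_lower_bound, margin_loss, lambda_mi) and hyperparameters are hypothetical stand-ins, not the paper's actual method.

```python
# Illustrative sketch only; the paper's exact architecture, MI estimator, and
# margin definition are not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    """Small convolutional representation model (stand-in for the classwise deep net)."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def mi_lower_bound(domain_head, z, d):
    """Variational lower bound on I(Z; D) = H(D) - H(D|Z): the cross-entropy of a
    domain classifier q(d|z) upper-bounds H(D|Z) (Barber-Agakov)."""
    h_d_given_z = F.cross_entropy(domain_head(z), d)
    p = torch.bincount(d, minlength=2).float() / d.numel()
    h_d = -(p * (p + 1e-8).log()).sum()  # empirical domain entropy, constant w.r.t. params
    return h_d - h_d_given_z

def margin_loss(z, y, margin=1.0):
    """Hinge on a batch-wise margin: each point's nearest interclass neighbor should
    be farther than its farthest intraclass neighbor by at least `margin`."""
    dist = torch.cdist(z, z)
    same = (y[:, None] == y[None, :]).float()
    eye = torch.eye(len(y), device=z.device)
    intra = (dist * (same - eye)).max(dim=1).values   # farthest same-class neighbor
    inter = (dist + same * 1e9).min(dim=1).values     # nearest other-class neighbor
    return F.relu(intra - inter + margin).mean()

net, domain_head = FeatureNet(), nn.Linear(64, 2)
opt_net = torch.optim.Adam(net.parameters(), lr=1e-4)
opt_dom = torch.optim.Adam(domain_head.parameters(), lr=1e-4)
lambda_mi = 0.1  # hypothetical trade-off weight

def train_step(x, y, d):
    """x: images; y: class labels; d: 0/1 domain indicators (0 = source, 1 = target)."""
    z = net(x)
    # 1) Tighten the MI bound by training the domain head on detached features.
    opt_dom.zero_grad()
    (-mi_lower_bound(domain_head, z.detach(), d)).backward()
    opt_dom.step()
    # 2) Update the representation: maximize margins, minimize estimated I(Z; D).
    opt_net.zero_grad()
    loss = margin_loss(z, y) + lambda_mi * mi_lower_bound(domain_head, z, d)
    loss.backward()
    opt_net.step()
    return loss.item()
```

The alternating update mirrors the abstract's iterative Adam-based scheme: the auxiliary head keeps the MI estimate tight, while the feature network is pushed toward representations from which the domain indicator is hard to predict.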
