IEEE Transactions on Neural Networks and Learning Systems

Heterogeneous Multitask Metric Learning Across Multiple Domains



Abstract

Distance metric learning plays a crucial role in diverse machine learning algorithms and applications. When the labeled information in a target domain is limited, transfer metric learning (TML) helps to learn the metric by leveraging the sufficient information from other related domains. Multitask metric learning (MTML), which can be regarded as a special case of TML, performs transfer across all related domains. Current TML tools usually assume that the same feature representation is exploited for different domains. However, in real-world applications, data may be drawn from heterogeneous domains. Heterogeneous transfer learning approaches can be adopted to remedy this drawback by deriving a metric from the learned transformation across different domains. However, they are often limited in that only two domains can be handled. To appropriately handle multiple domains, we develop a novel heterogeneous MTML (HMTML) framework. In HMTML, the metrics of all different domains are learned together. The transformations derived from the metrics are utilized to induce a common subspace, and the high-order covariance among the predictive structures of these domains is maximized in this subspace. There do exist a few heterogeneous transfer learning approaches that deal with multiple domains, but the high-order statistics (correlation information), which can only be exploited by simultaneously examining all domains, are ignored in these approaches. Compared with them, the proposed HMTML can effectively explore such high-order information, thus obtaining more reliable feature transformations and metrics. The effectiveness of our method is validated by extensive experiments on text categorization, scene classification, and social image annotation.
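To make the idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of jointly learning one linear transformation per heterogeneous domain so that co-occurring samples, once projected into a shared subspace, have maximal high-order covariance; the induced metric for each domain is M_t = U_t U_t^T. The function name hmtml_sketch, the aligned (diagonal) form of the third-order covariance, and all parameter choices are assumptions for illustration, and the per-domain supervised metric-learning loss described in the abstract is omitted here.

```python
# Hypothetical sketch of joint heterogeneous transformation learning.
# Assumption: each domain provides a different feature representation of the
# same N co-occurring samples (e.g. one document described in three ways).
import numpy as np

def hmtml_sketch(X_list, r=10, n_iters=200, lr=1e-2, seed=0):
    """X_list: list of T arrays, each of shape (N, d_t), rows aligned across domains.
    Returns the per-domain transformations U_t and the induced metrics U_t U_t^T."""
    rng = np.random.default_rng(seed)
    T, N = len(X_list), X_list[0].shape[0]
    # One transformation per domain; columns normalized to keep scales bounded.
    U = [rng.standard_normal((X.shape[1], r)) for X in X_list]
    U = [u / np.linalg.norm(u, axis=0, keepdims=True) for u in U]

    for _ in range(n_iters):
        Z = [X @ u for X, u in zip(X_list, U)]          # projections, each (N, r)
        for t in range(T):
            # Element-wise product of the other domains' projections.
            others = np.ones((N, r))
            for s in range(T):
                if s != t:
                    others *= Z[s]
            # Gradient of the (diagonal) high-order covariance
            #   (1/N) * sum_i sum_k prod_t Z_t[i, k]   with respect to U_t.
            grad = X_list[t].T @ others / N
            U[t] += lr * grad                            # gradient ascent step
            U[t] /= np.linalg.norm(U[t], axis=0, keepdims=True)

    return U, [u @ u.T for u in U]

# Toy usage: three heterogeneous domains that share a latent signal.
N, r_true = 500, 5
latent = np.random.randn(N, r_true)
X_list = [latent @ np.random.randn(r_true, d) + 0.1 * np.random.randn(N, d)
          for d in (50, 80, 120)]
U, metrics = hmtml_sketch(X_list, r=5)
print([m.shape for m in metrics])   # [(50, 50), (80, 80), (120, 120)]
```

In this sketch all domains enter the objective simultaneously through the product term, which is the kind of high-order (beyond pairwise) coupling the abstract contrasts with two-domain heterogeneous transfer approaches.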

