IEEE Transactions on Knowledge and Data Engineering

Leveraging Implicit Relative Labeling-Importance Information for Effective Multi-Label Learning



Abstract

Multi-label learning deals with training examples each represented by a single instance while associated with multiple class labels, and the task is to train a predictive model which can assign a set of proper labels to an unseen instance. Existing approaches employ the common assumption of equal labeling-importance, i.e., all associated labels are regarded as relevant to the training instance, while their relative importance in characterizing its semantics is not differentiated. Nonetheless, this common assumption does not reflect the fact that the importance degree of each relevant label is generally different, though the importance information is not directly accessible from the training examples. In this article, we show that it is beneficial to leverage the implicit relative labeling-importance (RLI) information to help induce a multi-label predictive model with strong generalization performance. Specifically, RLI degrees are formalized as a multinomial distribution over the label space, which can be estimated by either a global label propagation procedure or local k-nearest neighbor reconstruction. Correspondingly, the multi-label predictive model is induced by fitting the modeling outputs to the estimated RLI degrees along with multi-label empirical loss regularization. Extensive experiments clearly validate that leveraging implicit RLI information serves as a favorable strategy for achieving effective multi-label learning.
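The global label propagation route mentioned in the abstract can be illustrated with a minimal NumPy sketch. The function below is an assumption for illustration only, not the paper's actual algorithm: it builds a Gaussian-weighted k-nearest-neighbor similarity graph over the instances, applies the standard closed-form propagation F = (1 − α)(I − αS)⁻¹Y, then masks out irrelevant labels and row-normalizes so each training example receives a multinomial RLI distribution over its relevant labels.

```python
import numpy as np

def estimate_rli(X, Y, alpha=0.5, k=5):
    """Estimate relative labeling-importance (RLI) degrees by global
    label propagation (an illustrative sketch, not the paper's code).

    X : (n, d) feature matrix; Y : (n, q) binary relevant-label matrix.
    Returns U : (n, q) row-stochastic RLI degrees, zero on irrelevant labels.
    """
    n = X.shape[0]
    # Gaussian similarities from pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = d2[d2 > 0].mean() + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)
    # keep each instance's k strongest edges, symmetrized
    idx = np.argsort(-W, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(n)[:, None], idx] = True
    W = np.where(mask | mask.T, W, 0.0)
    # symmetric normalization S = D^{-1/2} W D^{-1/2}
    dinv = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * dinv[:, None] * dinv[None, :]
    # closed-form propagation: F = (1 - alpha) (I - alpha S)^{-1} Y
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y.astype(float))
    # restrict mass to the relevant labels and renormalize each row,
    # yielding a multinomial distribution over the label space
    F = np.clip(F, 0.0, None) * Y
    return F / np.maximum(F.sum(1, keepdims=True), 1e-12)
```

The resulting RLI degrees would then serve as regression targets for the predictive model, e.g., by minimizing a Kullback-Leibler divergence between model outputs and the estimated distributions together with a multi-label empirical loss term, as the abstract describes.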

Record details

  • Source
  • Author affiliations

    Southeast Univ Sch Comp Sci & Engn Nanjing 210096 Peoples R China|Southeast Univ Minist Educ Key Lab Comp Network & Informat Integrat Nanjing Peoples R China;

    Southeast Univ Sch Comp Sci & Engn Nanjing 210096 Peoples R China|Southeast Univ Minist Educ Key Lab Comp Network & Informat Integrat Nanjing Peoples R China;

    Southeast Univ Sch Comp Sci & Engn Nanjing 210096 Peoples R China|Southeast Univ Minist Educ Key Lab Comp Network & Informat Integrat Nanjing Peoples R China;

    Southeast Univ Sch Comp Sci & Engn Nanjing 210096 Peoples R China|Southeast Univ Minist Educ Key Lab Comp Network & Informat Integrat Nanjing Peoples R China|Baidu Inc Business Grp Nat Language Proc Beijing Peoples R China;

    Southeast Univ Sch Comp Sci & Engn Nanjing 210096 Peoples R China|Southeast Univ Minist Educ Key Lab Comp Network & Informat Integrat Nanjing Peoples R China;

  • Indexing information
  • Format: PDF
  • Language: English
  • CLC classification
  • Keywords

    Machine learning; multi-label learning; relative labeling-importance; label distribution; regularization;


