IEEE Transactions on Knowledge and Data Engineering

Affinity Regularized Non-Negative Matrix Factorization for Lifelong Topic Modeling



Abstract

Lifelong topic modeling (LTM), an emerging paradigm for never-ending topic learning, aims to yield higher-quality topics over time by accumulating knowledge from past learning and carrying it forward to future tasks. In this paper, we propose a novel lifelong topic model based on non-negative matrix factorization (NMF), called Affinity Regularized NMF for LTM (NMF-LTM), which, to the best of our knowledge, is distinct from the popular LDA-based LTMs. NMF-LTM achieves lifelong learning by introducing a word-word graph Laplacian as a semantic affinity regularizer. Other priors, such as sparsity, diversity, and between-class affinity, are incorporated as well for better performance, and a theoretical guarantee is provided that the algorithm converges to a local minimum. Extensive experiments on various public corpora demonstrate the effectiveness of NMF-LTM, particularly its human-like behavior in two carefully designed learning tasks and its ability to model topics in big data. A further exploration of semantic relatedness in knowledge graphs and a case study on a large-scale real-world corpus exhibit the strength of NMF-LTM in discovering high-quality topics efficiently and robustly.
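
The core mechanism described above, NMF with a word-word graph Laplacian acting as a semantic affinity regularizer, can be sketched in a few lines. The Python sketch below is illustrative only: it implements plain graph-regularized NMF with the standard multiplicative-update splitting of the Laplacian (L = D_A - A), and it omits the paper's sparsity, diversity, and between-class affinity terms as well as the lifelong knowledge-accumulation loop; the function name, parameter names, and toy data are assumptions, not the authors' code.

```python
import numpy as np

def nmf_word_affinity(X, A, k, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Graph-regularized NMF sketch (hypothetical helper, not the paper's code).

    Minimizes ||X - W H||_F^2 + lam * tr(W^T L W) with W, H >= 0,
    where X is a (words x docs) nonnegative matrix, A a symmetric
    nonnegative word-word affinity matrix, and L = D_A - A its Laplacian.
    """
    rng = np.random.default_rng(seed)
    V, D = X.shape
    W = rng.random((V, k))            # word-topic factor
    H = rng.random((k, D))            # topic-document factor
    DA = np.diag(A.sum(axis=1))       # degree matrix of the affinity graph
    for _ in range(n_iter):
        # standard multiplicative update for the topic-document factor
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        # affinity-regularized update for the word-topic factor: the
        # Laplacian gradient 2 * lam * (D_A - A) W splits into a positive
        # part (D_A W, denominator) and a negative part (A W, numerator),
        # which keeps the iterates nonnegative
        W *= (X @ H.T + lam * (A @ W)) / (W @ H @ H.T + lam * (DA @ W) + eps)
    return W, H

# toy usage: 6 words on a chain affinity graph, 8 documents, 2 topics
X = np.abs(np.random.default_rng(1).standard_normal((6, 8)))
A = np.eye(6, k=1) + np.eye(6, k=-1)  # adjacent words are "semantically related"
W, H = nmf_word_affinity(X, A, k=2, lam=0.5)
print("reconstruction error:", np.linalg.norm(X - W @ H))
```

The affinity term pulls words connected in the graph toward similar rows of W, i.e., similar topic loadings; this is the hook through which word-word knowledge accumulated from earlier corpora can steer later factorizations.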
