International Semantic Web Conference

How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs


Abstract

Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, moving to a latent space loses the reference to the actual semantics of the recommended items. Consequently, interpreting the recommendation process becomes non-trivial. In this paper, we show how to initialize the latent factors of Factorization Machines with semantic features coming from a knowledge graph in order to train an interpretable model. With our model, semantic features are injected into the learning process so that the original informativeness of the items in the dataset is retained. The accuracy and effectiveness of the trained model have been tested on two well-known recommender-system datasets. By relying on the information encoded in the original knowledge graph, we have also evaluated the semantic accuracy and robustness of the knowledge-aware interpretability of the final model.
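The abstract describes tying each latent dimension of a Factorization Machine to a semantic feature drawn from a knowledge graph, so that the learned factors stay interpretable. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' implementation: latent factors are seeded from an item/knowledge-graph incidence matrix instead of random noise, so each dimension keeps a human-readable label. The class name `KGSeededFM`, the feature labels (e.g. "dbo:director=..."), and all hyperparameters are illustrative assumptions.

```python
import numpy as np

class KGSeededFM:
    """Second-order Factorization Machine whose latent dimensions are
    aligned with knowledge-graph features (illustrative sketch only)."""

    def __init__(self, n_features, kg_feature_names, lr=0.01, reg=0.01):
        self.k = len(kg_feature_names)          # one latent dim per KG feature
        self.feature_names = kg_feature_names   # e.g. "dbo:director=Nolan"
        self.w0 = 0.0
        self.w = np.zeros(n_features)
        self.V = None                           # (n_features, k), set by seeding
        self.lr, self.reg = lr, reg

    def seed_latent_factors(self, kg_incidence, noise=0.01):
        # kg_incidence: (n_features, k) binary matrix; row i marks which KG
        # features describe input variable i (user variables may be all zeros).
        # Seeding from it keeps dimension j semantically tied to feature_names[j].
        self.V = kg_incidence.astype(float) + noise * np.random.randn(*kg_incidence.shape)

    def predict(self, x):
        # Standard FM: w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j
        xv = x @ self.V                                      # shape (k,)
        pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (self.V ** 2))
        return self.w0 + self.w @ x + pairwise

    def sgd_step(self, x, y):
        # One SGD update for squared error on a single interaction (x, y).
        err = self.predict(x) - y
        self.w0 -= self.lr * err
        self.w -= self.lr * (err * x + self.reg * self.w)
        xv = x @ self.V
        grad_V = err * (np.outer(x, xv) - (x ** 2)[:, None] * self.V)
        self.V -= self.lr * (grad_V + self.reg * self.V)

    def explain_item(self, item_index, top_n=3):
        # Rank KG features by the magnitude of the item's latent coordinates,
        # yielding labels such as ("dbo:director=Nolan", 0.87).
        scores = np.abs(self.V[item_index])
        top = np.argsort(-scores)[:top_n]
        return [(self.feature_names[j], float(self.V[item_index, j])) for j in top]
```

Because every latent dimension is named after the knowledge-graph feature that seeded it, `explain_item` can read an explanation directly off the learned factors, which is the kind of knowledge-aware interpretability the paper evaluates.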

