
Active Learning Strategies for Rating Elicitation in Collaborative Filtering: A System-Wide Perspective



Abstract

The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of the available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training, that is, garbage in, garbage out. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user's preferences. However, the traditional evaluation of active learning strategies has two major flaws, which have significant negative ramifications for accurately evaluating the system's performance (prediction error, precision, and quantity of elicited ratings). (1) Performance has been evaluated for each user independently (ignoring system-wide improvements). (2) Active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation, which ignores any changes in the rating predictions of other users, also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system-centric). We propose a new evaluation methodology and use it to evaluate some novel and state-of-the-art rating elicitation strategies. We found that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process and on the evaluation measures (MAE, NDCG, and Precision). In particular, we show that using some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems.
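To make the system-wide evaluation idea concrete, here is a minimal Python sketch (assuming hypothetical `elicit` and `retrain_and_predict` callables and a simple dictionary-based rating store; none of these names come from the paper). It measures MAE over the held-out ratings of all users before and after elicitation, so improvements that one user's elicited ratings bring to other users' predictions are also counted.

```python
import numpy as np


def mae(predicted, actual):
    """Mean absolute error over a set of test ratings."""
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(actual))))


def system_wide_mae(train, test, elicit, retrain_and_predict):
    """Measure the system-wide effect of a rating elicitation strategy.

    train, test          : dicts mapping (user, item) -> rating
    elicit(user, train)  : returns (user, item, rating) triples newly
                           obtained from `user` (the elicitation strategy)
    retrain_and_predict(train, pairs) : retrains the rating predictor on
                           `train` and returns predictions for `pairs`

    All of these names are hypothetical placeholders, not the paper's API.
    """
    test_pairs = list(test)
    actual = [test[p] for p in test_pairs]

    # Error before any elicitation (baseline).
    before = mae(retrain_and_predict(dict(train), test_pairs), actual)

    # Elicit ratings from every user and add them to the training data.
    augmented = dict(train)
    for user in {u for (u, _item) in test_pairs}:
        for (u, i, r) in elicit(user, augmented):
            augmented[(u, i)] = r

    # System-wide evaluation: error over *all* users' held-out ratings,
    # so it also counts improvements that one user's elicited ratings
    # bring to the predictions made for other users. A user-centric
    # protocol would instead retrain per user and score only that
    # user's own test ratings, ignoring these cross-user effects.
    after = mae(retrain_and_predict(augmented, test_pairs), actual)
    return before, after
```

The same loop can be scored with NDCG or Precision instead of MAE, and interleaving the elicited ratings with ratings that users volunteer on their own (natural acquisition) would correspond to the concurrent evaluation setting the abstract describes.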
