ACM Transactions on Multimedia Computing, Communications, and Applications

Exploration in Interactive Personalized Music Recommendation: A Reinforcement Learning Approach

Abstract

Current music recommender systems typically act in a greedy manner by recommending songs with the highest user ratings. Greedy recommendation, however, is suboptimal over the long term: it does not actively gather information on user preferences and fails to recommend novel songs that are potentially interesting. A successful recommender system must balance the needs to explore user preferences and to exploit this information for recommendation. This article presents a new approach to music recommendation by formulating this exploration-exploitation trade-off as a reinforcement learning task. To learn user preferences, it uses a Bayesian model that accounts for both audio content and the novelty of recommendations. A piecewise-linear approximation to the model and a variational inference algorithm help to speed up Bayesian inference. One additional benefit of our approach is a single unified model for both music recommendation and playlist generation. We demonstrate the strong potential of the proposed approach with simulation results and a user study.
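To make the exploration-exploitation trade-off described above concrete, the sketch below is a rough illustration only, not the paper's actual model (which couples audio content with recommendation novelty and relies on a piecewise-linear approximation and variational inference). It uses Thompson sampling over a simple Bayesian linear model of ratings on audio features: recommending under a posterior sample, rather than the posterior mean, is what balances exploring songs with uncertain appeal against exploiting songs already known to be good. All dimensions, priors, and the synthetic user are illustrative assumptions.

```python
# A minimal sketch (not the authors' exact model): Thompson sampling over a
# Bayesian linear model of user ratings, where each song is described by an
# audio-feature vector. Sampling from the posterior, instead of greedily
# picking the posterior-mean best song, balances exploration and exploitation.
import numpy as np

rng = np.random.default_rng(0)
n_songs, n_features = 50, 8
songs = rng.normal(size=(n_songs, n_features))   # audio-content features (assumed)
true_w = rng.normal(size=n_features)             # hidden user preference (simulated)
noise_sd = 0.5

# Gaussian prior over preference weights: w ~ N(0, I)
A = np.eye(n_features)       # posterior precision
b = np.zeros(n_features)     # precision-weighted observation accumulator

for t in range(200):
    # Thompson sampling: draw a plausible preference vector from the posterior
    cov = np.linalg.inv(A)
    mean = cov @ b
    w_sample = rng.multivariate_normal(mean, cov)

    # Recommend the song that looks best under the sampled preferences
    chosen = int(np.argmax(songs @ w_sample))

    # Observe a (simulated) noisy rating and update the Bayesian posterior
    rating = songs[chosen] @ true_w + rng.normal(scale=noise_sd)
    x = songs[chosen]
    A += np.outer(x, x) / noise_sd**2
    b += rating * x / noise_sd**2

print("estimated vs. true preference correlation:",
      np.corrcoef(np.linalg.inv(A) @ b, true_w)[0, 1])
```

In this toy run the posterior concentrates around the hidden preference vector as ratings accumulate; the randomness injected by posterior sampling is what keeps under-explored songs in play, which a purely greedy recommender would neglect.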
