SIGKDD Explorations

Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System

Abstract

We propose a novel way to train ranking models, such as recommender systems, that are both effective and efficient. Knowledge distillation (KD) has been shown to achieve both effectiveness and efficiency in image recognition. We propose a KD technique for learning-to-rank problems, called ranking distillation (RD). Specifically, we train a smaller student model to learn to rank documents/items from both the training data and the supervision of a larger teacher model. The student model achieves a ranking performance similar to that of the large teacher model, but its smaller size makes online inference more efficient. RD is flexible because it is orthogonal to the choice of ranking models for the teacher and the student. We address the challenges of applying RD to ranking problems. Experiments on public data sets and state-of-the-art recommendation models showed that RD achieves its design purposes: the student model learnt with RD is less than half the size of the teacher model, yet achieves ranking performance similar to that of the teacher model and much better than the same student model learnt without RD.
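
To make the training scheme described in the abstract concrete, the sketch below shows one plausible way a student objective could combine the ground-truth training signal with supervision from the teacher's top-K ranked items. This is a minimal PyTorch-style illustration, not the paper's exact objective: the function name `ranking_distillation_loss`, the `alpha` balance weight, the pointwise formulation, and the uniform weighting of the teacher's top-K items are all assumptions made for exposition.

```python
import torch
import torch.nn.functional as F

def ranking_distillation_loss(student_scores, positive_items, teacher_topk, alpha=0.5):
    # student_scores: (batch, num_items) logits from the compact student model
    # positive_items: (batch,) indices of ground-truth items from the training data
    # teacher_topk:   (batch, K) indices of the items ranked highest by the teacher
    # alpha:          assumed hyperparameter balancing the two loss terms

    # Loss on the observed training data: softmax cross-entropy with the
    # ground-truth item as the target class.
    data_loss = F.cross_entropy(student_scores, positive_items)

    # Distillation loss: push the student to also score the teacher's top-K
    # items highly (uniform weights here; any position-dependent weighting
    # of these K items is not reproduced in this sketch).
    topk_scores = student_scores.gather(1, teacher_topk)  # (batch, K)
    distill_loss = F.binary_cross_entropy_with_logits(
        topk_scores, torch.ones_like(topk_scores)
    )

    return data_loss + alpha * distill_loss


# Example usage with random tensors (shapes are illustrative only):
batch, num_items, k = 32, 10_000, 10
student_scores = torch.randn(batch, num_items, requires_grad=True)
positive_items = torch.randint(num_items, (batch,))
teacher_topk = torch.randint(num_items, (batch, k))
loss = ranking_distillation_loss(student_scores, positive_items, teacher_topk)
loss.backward()
```

Because the distillation term only touches the student's scores, it can be added to any differentiable ranking model, which is consistent with the abstract's claim that RD is orthogonal to the choice of teacher and student architectures.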