
Directly Optimize Diversity Evaluation Measures: A New Approach to Search Result Diversification


Abstract

The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The quality of a diverse ranking model is usually evaluated with diversity evaluation measures such as alpha-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally, the learning algorithm would train a ranking model that directly optimizes the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train ranking models by optimizing loss functions that are loosely related to the evaluation measures. To address this problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. During learning, an upper bound on the basic loss function, which is defined in terms of a diversity evaluation measure, is minimized. New diverse ranking algorithms can be derived under this framework, and we create several such algorithms based on different upper bounds of the basic loss function. We compared the proposed algorithms with conventional diverse ranking methods on the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework consistently and significantly outperform the state-of-the-art baselines.
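To make the evaluation side of the abstract concrete, below is a minimal illustrative sketch (not code from the paper) of alpha-NDCG under binary subtopic judgments. The function names, the judgment format (document -> set of covered subtopics), and the greedy construction of the ideal list are assumptions of this sketch; the greedy step is the usual approximation because computing the exact ideal ranking is NP-hard.

```python
import math

def alpha_dcg(ranking, judgments, alpha=0.5, depth=10):
    """alpha-DCG@depth: a document's gain for a subtopic decays by a
    factor of (1 - alpha) for every higher-ranked document that already
    covered that subtopic; gains are log-discounted by rank."""
    covered = {}  # subtopic -> times seen so far in the ranking
    score = 0.0
    for rank, doc in enumerate(ranking[:depth]):
        subtopics = judgments.get(doc, ())
        gain = sum((1.0 - alpha) ** covered.get(s, 0) for s in subtopics)
        for s in subtopics:
            covered[s] = covered.get(s, 0) + 1
        score += gain / math.log2(rank + 2)
    return score

def alpha_ndcg(ranking, judgments, alpha=0.5, depth=10):
    """Normalize by a greedily built ideal list (greedy is the standard
    approximation; the exact ideal ranking is NP-hard to compute)."""
    remaining = set(judgments)
    ideal, covered = [], {}
    while remaining and len(ideal) < depth:
        best = max(remaining,
                   key=lambda d: sum((1.0 - alpha) ** covered.get(s, 0)
                                     for s in judgments[d]))
        remaining.remove(best)
        ideal.append(best)
        for s in judgments[best]:
            covered[s] = covered.get(s, 0) + 1
    denom = alpha_dcg(ideal, judgments, alpha, depth)
    return alpha_dcg(ranking, judgments, alpha, depth) / denom if denom else 0.0
```

For example, with judgments {"d1": {"a", "b"}, "d2": {"a"}, "d3": {"b", "c"}}, the ranking d1, d3, d2 scores higher than d1, d2, d3, because placing d2 second only repeats the already-covered subtopic "a".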
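The abstract's upper-bounding construction resembles a margin-rescaled structured hinge loss (in the style of structured-SVM approaches to optimizing ranking measures). The toy sketch below is not the paper's algorithm: it assumes a tiny enumerable candidate set, a hypothetical additive scoring function `model_score`, and an arbitrary measure passed in as a callable. Because the inner max ranges over all rankings, including the model's own top-scoring one, the hinge value upper-bounds the measure loss of the predicted ranking.

```python
import itertools

def model_score(order, doc_scores):
    # hypothetical additive ranking function: rank-discounted document scores
    return sum(doc_scores[d] / (rank + 1.0) for rank, d in enumerate(order))

def structured_hinge(doc_scores, measure):
    """Margin-rescaled structured hinge loss over all permutations of the
    candidate documents (enumeration is only feasible for tiny examples).
    Upper-bounds M(y*) - M(y_hat), where y* is measure-optimal and y_hat
    is the ranking that maximizes the model score."""
    rankings = list(itertools.permutations(doc_scores))
    y_star = max(rankings, key=measure)            # measure-optimal ranking
    m_star = measure(y_star)
    s_star = model_score(y_star, doc_scores)
    violation = max((m_star - measure(y)) + model_score(y, doc_scores) - s_star
                    for y in rankings)
    return max(0.0, violation)
```

Minimizing this surrogate drives the model score of the measure-optimal ranking above that of every competing ranking by a margin proportional to the measure gap, which is what makes direct optimization of a (non-smooth) diversity measure tractable.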

Bibliographic Information

  • Source
    ACM transactions on intelligent systems | 2017, No. 3 | pp. 41.1-41.26 | 26 pages
  • Author affiliation

    Chinese Acad Sci, Inst Comp Technol, CAS Key Lab Network Data Sci & Technol, Beijing, Peoples R China|Chinese Acad Sci, Inst Comp Technol, 6 Kexueyuan South Rd, Beijing 100190, Peoples R China;


  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Search result diversification; relational learning to rank; diversity evaluation measure;


