...
Journal: IEEE Transactions on Knowledge and Data Engineering

Modeling the Parameter Interactions in Ranking SVM with Low-Rank Approximation


Abstract

Ranking SVM, which formalizes the problem of learning a ranking model as that of learning a binary SVM on preference pairs of documents, is a state-of-the-art ranking model in information retrieval. The dual-form solution of a linear Ranking SVM model can be written as a linear combination of the preference pairs, i.e., $w = \sum_{(i,j)} \alpha_{ij} (x_i - x_j)$, where $\alpha_{ij}$ denotes the Lagrange parameter associated with each preference pair $(i, j)$. It is observed that there exist obvious interactions among the document pairs, because two preference pairs may share the same document as one of their items; e.g., the preference pairs $(d_1, d_2)$ and $(d_1, d_3)$ share the document $d_1$. Thus it is natural to ask whether interactions also exist over the model parameters $\alpha_{ij}$, which might be leveraged to construct better ranking models. This paper aims to answer that question. We empirically found that there exists a low-rank structure over the rearranged Ranking SVM model parameters $\alpha_{ij}$, which indicates that the interactions do exist. Based on this discovery, we modified the original Ranking SVM model by explicitly applying low-rank constraints to the Lagrange parameters, obtaining two novel algorithms called Factorized Ranking SVM and Regularized Ranking SVM, respectively. Specifically, in Factorized Ranking SVM each parameter $\alpha_{ij}$ is decomposed as a product of two low-dimensional vectors, i.e., $\alpha_{ij} = \langle v_i, v_j \rangle$, where the vectors $v_i$ and $v_j$ correspond to documents $i$ and $j$, respectively; in Regularized Ranking SVM, a nuclear norm is applied to the rearranged parameter matrix to control its rank. Experimental results on three LETOR datasets show that both of the proposed methods outperform state-of-the-art learning-to-rank models, including the conventional Ranking SVM.
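To make the factorized parameterization concrete, the sketch below (not the authors' implementation; function names, the per-document vectors V, and the toy data are illustrative assumptions) shows how dual weights of the form $\alpha_{ij} = \langle v_i, v_j \rangle$ would assemble the primal weight vector $w = \sum_{(i,j)} \alpha_{ij} (x_i - x_j)$ described in the abstract.

```python
import numpy as np

def factorized_weight_vector(X, pairs, V):
    """Illustrative sketch of the Factorized Ranking SVM parameterization.

    X     : (n_docs, n_features) document feature matrix.
    pairs : list of (i, j) preference pairs, document i preferred over j.
    V     : (n_docs, k) low-dimensional vectors, one per document, so that
            alpha_ij = <v_i, v_j>  (the assumed low-rank structure over the
            rearranged Lagrange parameters).

    Returns the primal weight vector w = sum_{(i,j)} alpha_ij * (x_i - x_j).
    """
    w = np.zeros(X.shape[1])
    for i, j in pairs:
        alpha_ij = V[i] @ V[j]          # factorized Lagrange parameter
        w += alpha_ij * (X[i] - X[j])   # contribution of preference pair (i, j)
    return w

# Toy usage: 4 documents, 3 features, rank-2 factorization.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
pairs = [(0, 1), (0, 2), (1, 3)]        # pairs sharing a document interact via V
V = rng.normal(size=(4, 2))
scores = X @ factorized_weight_vector(X, pairs, V)
print(scores)                            # rank documents by these scores
```

Because pairs such as (0, 1) and (0, 2) share document 0, their weights both depend on the same vector $v_0$, which is how the factorization couples the parameters of overlapping preference pairs.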
