European Conference on Computer Vision

Learning to Rank Using High-Order Information



Abstract

The problem of ranking a set of visual samples according to their relevance to a query plays an important role in computer vision. The traditional approach for ranking is to train a binary classifier such as a support vector machine (SVM). Binary classifiers suffer from two main deficiencies: (i) they do not optimize a ranking-based loss function, for example, the average precision (AP) loss; and (ii) they cannot incorporate high-order information such as the a priori correlation between the relevance of two visual samples (for example, two persons in the same image tend to perform the same action). We propose two novel learning formulations that allow us to incorporate high-order information for ranking. The first framework, called high-order binary SVM (HOB-SVM), allows for a structured input. The parameters of HOB-SVM are learned by minimizing a convex upper bound on a surrogate 0-1 loss function. In order to obtain the ranking of the samples that form the structured input, HOB-SVM sorts the samples according to their max-marginals. The second framework, called high-order average precision SVM (HOAP-SVM), also allows for a structured input and uses the same ranking criterion. However, in contrast to HOB-SVM, the parameters of HOAP-SVM are learned by minimizing a difference-of-convex upper bound on the AP loss. Using a standard, publicly available dataset for the challenging problem of action classification, we show that both HOB-SVM and HOAP-SVM outperform the baselines that ignore high-order information.
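For concreteness, the AP loss mentioned in the abstract corresponds to one minus the average precision of the ranking induced by the sample scores. The sketch below is purely illustrative (the function name and toy scores are my own, not the authors' implementation) and shows how average precision is computed from scores and binary relevance labels.

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision of the ranking induced by `scores`.

    `labels` are binary relevance indicators (1 = relevant to the query).
    Samples are sorted by decreasing score; AP averages the precision at
    each rank where a relevant sample appears. The AP loss is 1 - AP.
    """
    order = np.argsort(-np.asarray(scores))          # rank by decreasing score
    relevant = np.asarray(labels)[order]             # relevance in ranked order
    hits = np.cumsum(relevant)                       # relevant samples seen so far
    ranks = np.arange(1, len(relevant) + 1)
    precisions = hits / ranks                        # precision at every rank
    n_relevant = relevant.sum()
    if n_relevant == 0:
        return 0.0
    return float((precisions * relevant).sum() / n_relevant)

# A ranking that places both relevant samples first achieves AP = 1.
print(average_precision([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
print(average_precision([0.9, 0.2, 0.8, 0.1], [1, 1, 0, 0]))  # ~0.83
```

Because AP depends on the whole ordering rather than on per-sample decisions, it is not decomposable over samples, which is why the paper resorts to structured (difference-of-convex) upper bounds rather than a standard binary SVM objective.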
