
Supervised Learning With Implicit Preferences And Constraints



Abstract

In classical combinatorial optimization, a well-defined objective function is optimized subject to a set of deterministic constraints. Approximation algorithms and heuristic methods are often applied to problems proven to be difficult, NP-complete or beyond. However, in many real-world problem domains the objective (or utility, or preference) function an individual is trying to optimize is not explicitly known. Furthermore, preferences can take many different forms, and it is difficult to pre-define the correct format of the true utility function being optimized. To circumvent such limitations, we model these problems as machine learning tasks with implicit preferences that can be inferred from observations of the choices the individual made in the past. This approach contrasts with the traditional approach of learning the parameters of a utility or preference function whose functional form is explicitly defined a priori. We study a set of learning problem domains in which the preferences, or utility functions, are not explicitly defined, including structural learning, document citation ranking, and resource capacity constraint satisfaction. Our goal is to make accurate predictions for future instances, assuming the same underlying preferences as those expressed in past observations, without explicitly modeling them. The new algorithms and techniques we propose to optimize our learning formulations are shown to be very effective. For situations where both prediction accuracy and the explicit form of the preferences are important, we provide an Inductive Logic Programming (ILP) based algorithm to extract the preferences from a "black-box" machine learning model for intuitive human interpretation.

Bibliographic Record

  • Author: Guo Yunsong
  • Year: 2010
  • Format: PDF
  • Language: en_US
