
Sparse choice models


Abstract

Choice models, which capture popular preferences over objects of interest, play a key role in making decisions whose eventual outcome is impacted by human choice behavior. In most scenarios, the choice model, which can effectively be viewed as a distribution over permutations, must be learned from observed data. The observed data, in turn, may frequently be viewed as (partial, noisy) information about marginals of this distribution over permutations. As such, the search for an appropriate choice model boils down to learning a distribution over permutations that is (near-)consistent with observed information about this distribution. In this work, we pursue a non-parametric approach which seeks to learn a choice model (i.e. a distribution over permutations) with the sparsest possible support that is consistent with observed data. We assume that the observed data consists of noisy information pertaining to the marginals of the choice model we seek to learn. We establish that any choice model admits a 'very' sparse approximation, in the sense that there exists a choice model whose support is small relative to the dimension of the observed data and whose marginals approximately agree with the observed marginal information. We further show that under what we dub 'signature' conditions, such a sparse approximation can be found in a computationally efficient fashion relative to a brute-force approach. An empirical study using the American Psychological Association election dataset suggests that our approach manages to unearth useful structural properties of the underlying choice model using the sparse approximation found. Our results further suggest that the signature condition is a potential alternative to the recently popularized Restricted Null Space condition for efficient recovery of sparse models.
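The central objects in the abstract can be made concrete with a small example. Below is a minimal sketch (not the paper's algorithm) of the setting: a choice model is a distribution over permutations, the observed data are its first-order marginals (the probability that item i lands in rank position j), and a sparse model matching those marginals can be recovered greedily, here by brute force over all n! permutations, so it is only practical for tiny n. The function names and the greedy peeling heuristic are illustrative assumptions.

```python
import itertools
import numpy as np

def first_order_marginals(support, weights, n):
    """M[i, j] = probability that item i is placed at position j,
    under the sparse choice model given by (support, weights)."""
    M = np.zeros((n, n))
    for perm, w in zip(support, weights):
        for pos, item in enumerate(perm):
            M[item, pos] += w
    return M

def greedy_sparse_decomposition(M, tol=1e-9):
    """Greedily peel permutations off the marginal matrix M.

    At each step, pick the permutation that can carry the largest
    weight (the minimum residual mass along its item-position pairs)
    and subtract it. Brute-forces all n! permutations; illustrative
    only, not the 'signature condition' method from the paper.
    """
    n = M.shape[0]
    R = M.copy()
    support, weights = [], []
    while R.max() > tol:
        best_perm, best_w = None, 0.0
        for perm in itertools.permutations(range(n)):
            w = min(R[item, pos] for pos, item in enumerate(perm))
            if w > best_w:
                best_perm, best_w = perm, w
        if best_perm is None:
            break  # no single permutation fits the residual mass
        support.append(best_perm)
        weights.append(best_w)
        for pos, item in enumerate(best_perm):
            R[item, pos] -= best_w

    return support, weights

# A hidden model supported on 2 of the 3! = 6 permutations of 3 items:
true_support = [(0, 1, 2), (2, 1, 0)]
true_weights = [0.6, 0.4]
M = first_order_marginals(true_support, true_weights, n=3)

# Recover a sparse model consistent with the observed marginals.
support, weights = greedy_sparse_decomposition(M)
```

On this toy instance the greedy routine recovers a model whose marginals match M exactly, with support far smaller than the 6-dimensional space of permutations, echoing the abstract's point that the observed marginal data only ever pins down a low-dimensional projection, so a sparse consistent model always exists.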
