Sparse Estimation Using General Likelihoods and Non-Factorial Priors

Abstract

Finding maximally sparse representations from overcomplete feature dictionaries frequently involves minimizing a cost function composed of a likelihood (or data fit) term and a prior (or penalty function) that favors sparsity. While typically the prior is factorial, here we examine non-factorial alternatives that have a number of desirable properties relevant to sparse estimation and are easily implemented using an efficient and globally-convergent, reweighted ℓ_1-norm minimization procedure. The first method under consideration arises from the sparse Bayesian learning (SBL) framework. Although based on a highly non-convex underlying cost function, in the context of canonical sparse estimation problems, we prove uniform superiority of this method over the Lasso in that, (i) it can never do worse, and (ii) for any dictionary and sparsity profile, there will always exist cases where it does better. These results challenge the prevailing reliance on strictly convex penalty functions for finding sparse solutions. We then derive a new non-factorial variant with similar properties that exhibits further performance improvements in some empirical tests. For both of these methods, as well as traditional factorial analogs, we demonstrate the effectiveness of reweighted ℓ_1-norm algorithms in handling more general sparse estimation problems involving classification, group feature selection, and non-negativity constraints. As a byproduct of this development, a rigorous reformulation of sparse Bayesian classification (e.g., the relevance vector machine) is derived that, unlike the original, involves no approximation steps and descends a well-defined objective function.
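To make the reweighted ℓ_1 scheme concrete, the sketch below implements the generic iterative reweighting loop (the factorial log-sum penalty variant popularized by Candès, Wakin, and Boyd), not the paper's SBL-based non-factorial weight update, which couples the weights through the dictionary covariance. The inner weighted Lasso is solved by plain proximal gradient descent (ISTA) with per-coordinate soft-thresholding; all function names, step sizes, and the `eps` smoothing constant are illustrative choices, not from the paper.

```python
import numpy as np

def weighted_lasso_ista(Phi, y, weights, lam, n_iter=500):
    """Solve min_x 0.5*||y - Phi x||^2 + lam * sum_i weights[i]*|x_i|
    by proximal gradient descent (ISTA) with soft-thresholding."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)       # gradient of the data-fit term
        z = x - grad / L                   # gradient step
        thresh = lam * weights / L         # per-coordinate threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

def reweighted_l1(Phi, y, lam=1e-3, n_outer=10, eps=1e-3):
    """Iteratively reweighted l1 minimization (factorial log-sum variant).

    Each outer pass solves a weighted Lasso, then sharpens the penalty
    around the current estimate: small coefficients get large weights
    and are pushed toward exact zero on the next pass.
    """
    w = np.ones(Phi.shape[1])              # first pass = standard Lasso
    x = np.zeros(Phi.shape[1])
    for _ in range(n_outer):
        x = weighted_lasso_ista(Phi, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)        # reweight; eps avoids division by zero
    return x
```

On a noiseless problem with a random overcomplete dictionary, a few outer passes typically recover the true support far more reliably than a single Lasso solve; the paper's SBL reweighting replaces the separable `1/(|x_i| + eps)` update with a non-factorial one derived from the marginal likelihood.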
