IEEE Transactions on Neural Networks and Learning Systems

Greedy Methods, Randomization Approaches, and Multiarm Bandit Algorithms for Efficient Sparsity-Constrained Optimization


Abstract

Several sparsity-constrained algorithms, such as orthogonal matching pursuit (OMP) or the Frank–Wolfe (FW) algorithm, work by iteratively selecting a novel atom to add to the current set of nonzero variables. This selection step is usually performed by computing the gradient and then looking for the gradient component with the maximal absolute entry. This step can be computationally expensive, especially for large-scale and high-dimensional data. In this paper, we aim to accelerate these sparsity-constrained optimization algorithms by exploiting the key observation that, for these algorithms to work, one only needs the coordinate of the gradient's top entry. Hence, we introduce algorithms based on greedy methods and randomization approaches that aim at cheaply estimating the gradient and its top entry. Another of our contributions is to cast the problem of finding the best gradient entry as a best-arm identification problem in a multiarmed bandit setting. Owing to this novel insight, we are able to provide a bandit-based algorithm that directly estimates the top entry in a very efficient way. We also give theoretical results showing that, with high probability, the resulting inexact FW and OMP algorithms behave similarly to their exact counterparts. We have carried out several experiments showing that the greedy deterministic and bandit approaches we propose can achieve an order-of-magnitude acceleration while remaining as effective as the exact gradient computation when used in algorithms such as OMP, FW, or CoSaMP.
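The selection step described above can be illustrated with a minimal sketch. The exact version computes the full gradient of the least-squares residual and takes the argmax of its absolute entries; a randomized variant, in the spirit of the estimation approaches the abstract mentions, correlates atoms with only a random subset of the samples. All function and variable names here are illustrative, not taken from the paper:

```python
import numpy as np

def select_atom_exact(A, residual):
    # Exact selection: correlate every atom (column of A) with the
    # residual and pick the index of the largest absolute gradient entry.
    gradient = A.T @ residual            # full gradient, cost O(n * d)
    return int(np.argmax(np.abs(gradient)))

def select_atom_subsampled(A, residual, n_samples, rng):
    # Randomized estimate: use only a random subset of the rows
    # (samples), trading a small error probability for speed.
    idx = rng.choice(A.shape[0], size=n_samples, replace=False)
    approx_gradient = A[idx].T @ residual[idx]
    return int(np.argmax(np.abs(approx_gradient)))

# Synthetic example: the residual is perfectly aligned with atom 7,
# so both selection rules should recover index 7.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 200))
residual = 5.0 * A[:, 7]
print(select_atom_exact(A, residual))
print(select_atom_subsampled(A, residual, 200, rng))
```

The subsampled variant costs O(n_samples * d) instead of O(n * d) per iteration, which is where the speedup for large-scale data would come from.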
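The bandit view of the same selection step can be sketched as follows: each coordinate of the gradient is an arm, and "pulling" an arm samples a few rows to refine that coordinate's estimate. The sketch below uses a simple successive-halving elimination scheme as a stand-in; it is an assumption for illustration, not the specific bandit algorithm of the paper:

```python
import numpy as np

def best_entry_bandit(A, residual, rng):
    # Best-arm identification by successive halving (illustrative).
    # Each coordinate j is an arm; each round samples a batch of rows,
    # scores the surviving arms on that batch, eliminates the weaker
    # half, and doubles the budget spent on the arms that remain.
    n, d = A.shape
    arms = np.arange(d)
    budget = max(1, n // 50)   # rows sampled per round (a tunable choice)
    while len(arms) > 1:
        rows = rng.choice(n, size=budget, replace=True)
        scores = np.abs(A[np.ix_(rows, arms)].T @ residual[rows])
        keep = np.argsort(scores)[len(arms) // 2:]   # better half survives
        arms = arms[keep]
        budget *= 2            # spend more samples on fewer arms
    return int(arms[0])

# Same synthetic setup as before: residual aligned with atom 7.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 200))
residual = 5.0 * A[:, 7]
print(best_entry_bandit(A, residual, rng))
```

The appeal of the bandit framing is that most coordinates are eliminated after touching only a small fraction of the rows, so the total number of entries of A that are read can be far smaller than the n * d cost of one exact gradient.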
