2015 IEEE 7th International Conference on Awareness Science and Technology

Dictionary learning with ℓ1/2 regularizer for sparsity based on proximal operator


Abstract

In this study, we propose a fast and efficient algorithm for learning an overcomplete dictionary for sparse representation of signals, using the nonconvex ℓ1/2 regularizer to promote sparsity. The special importance of the ℓ1/2 regularizer has been recognized in recent studies on sparse modeling. The ℓ1/2-norm, however, leads to a nonconvex and nonsmooth optimization problem that is difficult to solve efficiently. In this paper, we propose a method based on a decomposition scheme and alternating optimization that turns the whole problem into a set of subminimizations, each of which depends on only one dictionary atom or on the coefficient vector. Although the subproblem with respect to the coefficient vector remains nonsmooth and nonconvex due to the ℓ1/2 regularizer, it becomes much simpler and, remarkably, admits a closed-form solution once the proximal operator technique is introduced. The main advantage of the proposed algorithm, as suggested by the simulation study, is that it is faster and more efficient than state-of-the-art algorithms with different sparsity constraints.
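The abstract states that the coefficient-vector subproblem admits a closed-form solution via a proximal operator. The exact form the authors use is not given here, but the half-thresholding operator commonly cited in the ℓ1/2 sparse-modeling literature is the standard closed form and can serve as an illustration. The sketch below is a minimal Python example under the assumed componentwise scaling minimize_z (z - y)^2 + λ|z|^(1/2); the function name prox_l_half and this scaling convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l_half(y, lam):
    """Half-thresholding operator (illustrative sketch, assumed scaling):
    componentwise minimizer of (z - y)**2 + lam * |z|**0.5.
    Returns an array of the same shape as y.
    """
    y = np.asarray(y, dtype=float)
    z = np.zeros_like(y)
    # Components whose magnitude falls below this threshold are set to zero.
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(y) > thresh
    ym = y[mask]
    # Closed-form expression for the surviving components.
    phi = np.arccos((lam / 8.0) * (np.abs(ym) / 3.0) ** (-1.5))
    z[mask] = (2.0 / 3.0) * ym * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return z

# Example use: shrink a dense coefficient vector toward a sparse one.
codes = prox_l_half(np.array([1.0, -0.3, 2.0, 0.05]), lam=0.5)
```

In an alternating scheme of the kind the abstract describes, an operator like this would be applied to the coefficient vector at each pass, while each dictionary atom is updated through its own simpler subproblem.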

