Journal: Neurocomputing

Sparse dictionary learning by block proximal gradient with global convergence

Abstract

The present paper focuses on dictionary learning in the double sparsity model for sparse representation (or approximation). Due to the high non-convexity and discontinuity of the ℓ0 pseudo-norm, convergence proofs for sparse K-singular value decomposition (Sparse-KSVD) and online sparse dictionary learning (OSDL) are challenging. To analyze convergence theoretically, we relax the ℓ0 pseudo-norm to the convex entry-wise ℓ1 norm and reformulate the constrained optimization problem as an unconstrained one by leveraging a double-ℓ1 regularizer. The problem thus becomes bi-convex. To train a sparse dictionary, an algorithm with global convergence is derived based on the block proximal gradient (BPG) framework. Several experiments show that our algorithm outperforms Sparse-KSVD and OSDL in some cases. (C) 2019 Elsevier B.V. All rights reserved.
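The abstract names the ingredients but does not spell out the block updates. The following is a minimal NumPy sketch of one plausible instantiation, assuming the double sparsity model D = B·A (a fixed base dictionary B times a sparse matrix A) and the unconstrained double-ℓ1 objective 0.5·‖Y − BAX‖²_F + λ_A·‖A‖₁ + λ_X·‖X‖₁; each block update is a gradient step on the smooth term followed by entry-wise soft-thresholding, the proximal operator of the ℓ1 norm. All names here (bpg_double_l1, lam_A, lam_X) are hypothetical, not taken from the paper.

```python
import numpy as np

def soft_threshold(Z, tau):
    """Entry-wise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def bpg_double_l1(Y, B, m, lam_A=0.1, lam_X=0.1, n_iter=100, seed=0):
    """Hypothetical BPG sketch for
        min_{A,X} 0.5*||Y - B @ A @ X||_F^2 + lam_A*||A||_1 + lam_X*||X||_1,
    which is bi-convex: convex in A for fixed X, and in X for fixed A.
    One proximal gradient step per block, alternated."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((B.shape[1], m))
    X = rng.standard_normal((m, Y.shape[1]))
    for _ in range(n_iter):
        # X-block: gradient of the smooth term w.r.t. X is D^T (D X - Y).
        D = B @ A
        L_X = max(np.linalg.norm(D.T @ D, 2), 1e-12)  # Lipschitz constant of the gradient
        X = soft_threshold(X - (D.T @ (D @ X - Y)) / L_X, lam_X / L_X)
        # A-block: gradient of the smooth term w.r.t. A is B^T (B A X - Y) X^T.
        L_A = max(np.linalg.norm(B.T @ B, 2) * np.linalg.norm(X @ X.T, 2), 1e-12)
        A = soft_threshold(A - (B.T @ (B @ A @ X - Y) @ X.T) / L_A, lam_A / L_A)
    return A, X
```

For instance, with signals Y of shape (d, n) and a fixed base dictionary B of shape (d, p), bpg_double_l1(Y, B, m=2*p) returns the sparse dictionary factor A and the sparse codes X. The paper's actual algorithm may add extrapolation and step-size rules from the BPG framework that this sketch omits.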