JMLR: Workshop and Conference Proceedings

Dictionary Learning Based on Sparse Distribution Tomography



Abstract

We propose a new statistical dictionary learning algorithm for sparse signals that is based on an $\alpha$-stable innovation model. The parameters of the underlying model (the atoms of the dictionary, the sparsity index $\alpha$, and the dispersion of the transform-domain coefficients) are recovered using a new type of probability distribution tomography. Specifically, we drive our estimator with a series of random projections of the data, which results in an efficient algorithm. Moreover, since the projections are achieved using linear combinations, we can invoke the generalized central limit theorem to justify the use of our method for sparse signals that are not necessarily $\alpha$-stable. We evaluate our algorithm by performing two types of experiments: image in-painting and image denoising. In both cases, we find that our approach is competitive with state-of-the-art dictionary learning techniques. Beyond the algorithm itself, two aspects of this study are interesting in their own right. The first is our statistical formulation of the problem, which unifies the topics of dictionary learning and independent component analysis. The second is a generalization of a classical theorem about isometries of $\ell_p$-norms that constitutes the foundation of our approach.
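The key property the abstract relies on can be illustrated numerically: if the transform-domain coefficients are i.i.d. symmetric $\alpha$-stable, then any linear projection of the data is again $\alpha$-stable with the same sparsity index, so $\alpha$ can be recovered from scalar projections alone. The sketch below is not the paper's algorithm; it is a minimal illustration of that stability property, with invented dimensions (64-dimensional signals, 128 atoms) and a simple empirical characteristic-function regression standing in for the authors' estimator.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 64-dim signals, 128 atoms.
m, K, n = 64, 128, 20000
alpha_true = 1.5

# Sparse codes: i.i.d. symmetric alpha-stable (beta = 0) innovations.
S = levy_stable.rvs(alpha_true, 0.0, size=(K, n), random_state=rng)
D = rng.standard_normal((m, K)) / np.sqrt(m)   # random dictionary
Y = D @ S                                       # observed signals

# One random projection: u^T y is a linear combination of the codes,
# hence again symmetric alpha-stable with the same alpha (stability).
u = rng.standard_normal(m)
x = u @ Y                                       # n scalar samples

# Recover alpha from the empirical characteristic function:
# |phi(w)| = exp(-(sigma*|w|)^alpha), so
# log(-log|phi(w)|) = alpha*log(w) + alpha*log(sigma),
# i.e. a line in log(w) whose slope is alpha.
x = x / np.subtract(*np.percentile(x, [75, 25]))  # crude rescale by IQR
w = np.array([0.5, 1.0, 2.0])
phi = np.abs(np.exp(1j * np.outer(w, x)).mean(axis=1))
alpha_hat, _ = np.polyfit(np.log(w), np.log(-np.log(phi)), 1)
print(f"estimated alpha: {alpha_hat:.2f}")  # typically close to alpha_true
```

Because the slope of the regression is invariant to the scale of the projection, the same fit also yields the dispersion from the intercept; the paper's actual estimator aggregates many such projections rather than a single one.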
