Source: US Government Science and Technology Reports

Sparse Modeling with Universal Priors and Learned Incoherent Dictionaries(PREPRINT)



Abstract

Sparse data models have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. Work on learning sparse models has mostly focused on adapting the dictionary to tasks such as classification and reconstruction, optimizing extrinsic properties of the trained dictionaries. In this work, we first propose a learning method aimed at enhancing both extrinsic and intrinsic properties of the dictionaries, such as the mutual and cumulative coherence and the Gram matrix norm, characteristics known to improve the efficiency and performance of sparse coding algorithms. We then use tools from information theory to propose a sparsity regularization term that has several desirable theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones. These new sparse modeling components lead to improved coding performance and accuracy in reconstruction tasks.
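The intrinsic dictionary properties the abstract mentions, mutual coherence and the Gram matrix, are standard quantities and can be computed directly. The sketch below is an illustration of those definitions only (it is not the authors' learning method): for a dictionary D whose columns are atoms, the mutual coherence is the largest absolute inner product between distinct unit-normalized atoms, read off the Gram matrix.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct,
    unit-normalized columns (atoms) of the dictionary D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize each atom
    G = Dn.T @ Dn                                      # Gram matrix of the atoms
    np.fill_diagonal(G, 0.0)                           # ignore atom self-products
    return np.abs(G).max()

# Example: an overcomplete random dictionary of 20 atoms in R^10.
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 20))
mu = mutual_coherence(D)
# Coherence always lies in [0, 1]; it is 0 for an orthonormal basis.
assert 0.0 <= mu <= 1.0
```

Lower coherence (a Gram matrix closer to the identity) is what makes sparse coding algorithms such as matching pursuit better behaved, which is why the paper treats it as a property worth optimizing during learning.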


