International Conference on Signal Processing and Communications

A Singular Value Relaxation Technique for Learning Sparsifying Transforms


Abstract

We address the problem of learning data-adaptive square sparsifying transforms subject to a condition number constraint, and adopt an alternating minimization (alt. min.) strategy to solve it. In every iteration of alt. min., we propose a quadratic-program-based approach to update the singular values of the transform so that the condition number constraint is satisfied. After applying the Karush-Kuhn-Tucker optimality conditions, the set of updated singular values turns out to be expressible as an affine transformation of the current set of singular values. We refer to the resulting technique as singular value relaxation (SVR). The SVR-based transform learning algorithm is employed in signal sparsification and denoising applications. Performance evaluations show that SVR is about three times faster than K-SVD for denoising 512×512 images, and yields a PSNR gain of about 0.5 to 1 dB over K-SVD on synthesized signals and about 0.2 to 0.3 dB on natural images. The PSNR gains of SVR are shown to be comparable with those of a recently proposed transform learning algorithm that employs a closed-form transform-update rule.
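
To make the alternating-minimization structure concrete, the sketch below is a minimal, illustrative Python/NumPy loop: a hard-thresholding sparse-coding step followed by a transform update whose singular values are clipped to respect a condition-number bound. The clipping step is a simple stand-in for the paper's SVR update (which is derived from the KKT conditions of a quadratic program and takes the form of an affine map on the current singular values); the function names and parameters (s, kappa, n_iter) are illustrative choices, not taken from the paper.

```python
import numpy as np


def sparse_code(W, Y, s):
    """Sparse-coding step: apply the transform and keep only the s
    largest-magnitude coefficients in each column (hard thresholding)."""
    X = W @ Y
    drop = np.argsort(np.abs(X), axis=0)[:-s, :]   # indices of the smaller entries per column
    np.put_along_axis(X, drop, 0.0, axis=0)
    return X


def update_transform(Y, X, kappa):
    """Transform-update step.  NOTE: a simplified stand-in, not the paper's
    SVR rule.  The paper solves a quadratic program whose KKT conditions give
    an affine map on the current singular values; here we merely fit W by
    least squares and clip its singular values so that cond(W) <= kappa."""
    W_ls = X @ np.linalg.pinv(Y)                   # least-squares fit of W Y ~ X
    U, sig, Vt = np.linalg.svd(W_ls)
    sig = np.maximum(sig, sig[0] / kappa)          # enforce the condition-number bound
    return U @ np.diag(sig) @ Vt


def learn_transform(Y, s=4, kappa=5.0, n_iter=50, seed=0):
    """Alternating minimization: sparse coding followed by a transform update."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthonormal initialization
    for _ in range(n_iter):
        X = sparse_code(W, Y, s)
        W = update_transform(Y, X, kappa)
    return W


if __name__ == "__main__":
    Y = np.random.default_rng(1).standard_normal((16, 500))   # toy training signals
    W = learn_transform(Y)
    print("cond(W) =", np.linalg.cond(W))
```

Running the toy example prints the condition number of the learned transform, which stays below the chosen bound kappa; the exact SVR singular-value update from the paper would replace the clipping line in update_transform.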
