International Conference on Signal Processing and Communication Systems

A Singular Value Relaxation Technique for Learning Sparsifying Transforms

Abstract

We address the problem of learning data-adaptive square sparsifying transforms subject to a condition number constraint and adopt an alternating minimization (alt. min.) strategy to solve it. We propose a quadratic program based approach in every iteration of alt. min. to update the singular values of the transform so that the condition number constraint is satisfied. The set of updated singular values, as it turns out after applying the Karush-Kuhn-Tucker conditions for optimality, can be expressed as an affine transformation applied to the current set of singular values. We refer to the resulting technique as singular value relaxation (SVR). The SVR-based transform learning algorithm is employed in signal sparsification and denoising applications. Performance evaluations of SVR show that it is about three times faster than K-SVD for denoising images of size 512×512 and results in a PSNR gain of about 0.5 to 1 dB over K-SVD for synthesized signals, and about 0.2 to 0.3 dB for natural images. The PSNR gains of SVR are shown to be comparable with a recently proposed transform learning algorithm that employs a closed-form transform-update rule.
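
Below is a minimal NumPy sketch of the kind of alternating-minimization loop the abstract describes, intended only to illustrate its overall structure: a hard-thresholding sparse-coding step, a transform update, and a singular value adjustment (an affine map applied to the current singular values) that enforces the condition number constraint. The function names (sparse_code, relax_singular_values, learn_transform), the least-squares update via a pseudoinverse, and the particular affine coefficients are assumptions made here for illustration; the paper derives the affine coefficients from the KKT conditions of a quadratic program.

```python
import numpy as np

def sparse_code(W, X, s):
    """Sparse coding step: keep only the s largest-magnitude entries in each
    column of W @ X and zero out the rest (hard thresholding)."""
    Z = W @ X
    drop = np.argpartition(np.abs(Z), -s, axis=0)[:-s, :]  # indices of the smaller entries
    Z[drop, np.arange(Z.shape[1])] = 0.0
    return Z

def relax_singular_values(sigma, kappa_max):
    """Stand-in for the SVR step: apply an affine map a*sigma + b so that the
    relaxed singular values satisfy max/min <= kappa_max. The paper obtains
    a and b from the KKT conditions of a quadratic program; the coefficients
    below are a simple heuristic used only for illustration."""
    s_max, s_min = sigma.max(), sigma.min()
    if s_min > 0 and s_max / s_min <= kappa_max:
        return sigma                                   # constraint already met
    target_min = s_max / kappa_max                     # smallest allowed singular value
    a = (s_max - target_min) / (s_max - s_min) if s_max > s_min else 1.0
    b = s_max * (1.0 - a)                              # keeps s_max fixed, lifts s_min to target_min
    return a * sigma + b

def learn_transform(X, s=4, kappa_max=5.0, n_iter=50, seed=0):
    """Alternating minimization: sparse coding, then a transform update whose
    singular values are relaxed to meet the condition number constraint."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # perturbed identity start
    for _ in range(n_iter):
        Z = sparse_code(W, X, s)
        W_ls = Z @ np.linalg.pinv(X)                   # unconstrained least-squares fit (an assumption)
        U, sigma, Vt = np.linalg.svd(W_ls)
        sigma = relax_singular_values(sigma, kappa_max)
        W = U @ np.diag(sigma) @ Vt
    return W
```

For example, with X holding vectorized 8×8 image patches as its columns (shape 64 × N), learn_transform(X, s=8) would return a 64 × 64 transform W for which W @ X is approximately 8-sparse per column while the condition number of W stays below kappa_max.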
