IEEE Transactions on Neural Networks and Learning Systems

Joint and Direct Optimization for Dictionary Learning in Convolutional Sparse Representation



Abstract

Convolutional sparse coding (CSC) is a useful tool in many image and audio applications. Maximizing the performance of CSC requires that the dictionary used to store the features of signals be learned from real data. The resulting convolutional dictionary learning (CDL) problem is formulated within a nonconvex, nonsmooth optimization framework. Most existing CDL solvers alternately update the coefficients and the dictionary in an iterative manner. However, these approaches are prone to running redundant iterations, and their convergence properties are difficult to analyze. Moreover, most of these methods approximate the original nonconvex sparsity-inducing function with a convex regularizer to promote computational efficiency. This approximation may yield nonsparse representations and thereby hinder the performance of downstream applications. In this paper, we handle the nonconvex, nonsmooth constraints of the original CDL problem directly using a modified forward-backward splitting approach, in which the coefficients and the dictionary are updated simultaneously in each iteration. We also propose a novel parameter adaptation scheme that accelerates the algorithm in reaching a usable dictionary, and we prove the convergence of the resulting method. In addition, we show that the proposed approach lends itself to parallel processing, further reducing the computing time required to reach convergence. Experimental results demonstrate that our method requires less time than existing methods to reach the convergence point while attaining a smaller final functional value. We also applied the dictionaries learned by the proposed and existing methods to a signal-separation application; the dictionary learned with the proposed approach delivers performance superior to that of comparable methods.
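The abstract describes a modified forward-backward splitting scheme in which the coefficients and the dictionary are updated simultaneously in each iteration, rather than alternately. The paper's actual algorithm operates on convolutional operators with a nonconvex sparsity penalty and an adaptive parameter scheme; the sketch below is only a minimal, non-convolutional illustration of one joint forward-backward iteration, using an l1 penalty, a unit-norm column constraint on the dictionary, and hand-picked step sizes. All function names and step-size values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (backward step for the sparsity term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def joint_fb_step(D, x, y, lam, eta_x, eta_d):
    """One joint forward-backward iteration for the toy problem
        min_{D, x} 0.5 * ||D x - y||^2 + lam * ||x||_1
        s.t. ||d_j||_2 <= 1 for every dictionary column d_j.
    Both gradients are evaluated at the SAME current iterate (the joint
    update the abstract refers to), then each variable takes its own
    backward (prox / projection) step."""
    r = D @ x - y
    grad_x = D.T @ r          # forward step w.r.t. the coefficients
    grad_D = np.outer(r, x)   # forward step w.r.t. the dictionary
    x_new = soft_threshold(x - eta_x * grad_x, eta_x * lam)
    D_new = D - eta_d * grad_D
    # Backward step for the constraint: project columns onto the unit l2 ball.
    norms = np.maximum(np.linalg.norm(D_new, axis=0), 1.0)
    return D_new / norms, x_new
```

A usage loop would repeat `joint_fb_step` until the objective stabilizes; in the paper the step sizes are adapted per iteration rather than fixed, which is what its parameter adaptation scheme and convergence proof address.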
