IEEE Transactions on Image Processing

Convolutional Analysis Operator Learning: Acceleration and Convergence



Abstract

Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets - particularly with multi-layered structures, e.g., convolutional neural networks - or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches and thus, with careful algorithmic design, overcomes the memory problems; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer from the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computed tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
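The tight-frame filter condition mentioned in the abstract can be illustrated numerically. The sketch below (an assumption based on the abstract's description, not the paper's actual code) stacks K flattened filters of length R into a matrix W whose rows are the filters; the tight-frame condition then reads W^T W = (1/R) I, i.e., the filters' outer products sum to a scaled identity. Filters satisfying it can be constructed by rescaling a matrix with orthonormal columns:

```python
import numpy as np

# Hypothetical illustration of the tight-frame filter condition described
# in the abstract: K filters d_k of length R with sum_k d_k d_k^T = (1/R) I_R.
rng = np.random.default_rng(0)
K, R = 16, 9  # e.g., sixteen 3x3 filters, each flattened to length R = 9

# QR gives a K x R matrix Q with orthonormal columns (requires K >= R);
# rescaling by 1/sqrt(R) makes the rows of W a tight frame for R-space.
Q, _ = np.linalg.qr(rng.standard_normal((K, R)))
W = Q / np.sqrt(R)  # row k of W is the flattened filter d_k

gram = W.T @ W  # equals sum_k d_k d_k^T
print("tight-frame condition holds:", np.allclose(gram, np.eye(R) / R))
```

Any orthogonal rotation of Q yields another valid filter set, which is why an additional diversity-promoting regularizer (as the abstract describes) is useful for selecting well-spread filters.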


