IEEE Transactions on Pattern Analysis and Machine Intelligence

On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks



Abstract

Parsimonious representations are ubiquitous in modeling and processing information. Motivated by the recent Multi-Layer Convolutional Sparse Coding (ML-CSC) model, we herein generalize the traditional Basis Pursuit problem to a multi-layer setting, introducing similar sparse enforcing penalties at different representation layers in a symbiotic relation between synthesis and analysis sparse priors. We explore different iterative methods to solve this new problem in practice, and we propose a new Multi-Layer Iterative Soft Thresholding Algorithm (ML-ISTA), as well as a fast version (ML-FISTA). We show that these nested first order algorithms converge, in the sense that the function value of near-fixed points can get arbitrarily close to the solution of the original problem. We further show how these algorithms effectively implement particular recurrent convolutional neural networks (CNNs) that generalize feed-forward ones without introducing any parameters. We present and analyze different architectures resulting from unfolding the iterations of the proposed pursuit algorithms, including a new Learned ML-ISTA, providing a principled way to construct deep recurrent CNNs. Unlike other similar constructions, these architectures unfold a global pursuit holistically for the entire network. We demonstrate the emerging constructions in a supervised learning setting, consistently improving the performance of classical CNNs while maintaining the number of parameters constant.
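The iterative scheme the abstract describes can be sketched in a few lines of NumPy. The sketch below shows plain ISTA for the single-layer Basis Pursuit (lasso) problem, plus a hypothetical two-layer variant that illustrates the idea of nesting soft-thresholding steps across representation layers. The function names, dictionaries `D1`/`D2`, and step sizes are illustrative assumptions; the nested update is a simplification of the nesting idea, not the exact ML-ISTA rule derived in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, x, lam, n_iter=200):
    """Plain ISTA for the single-layer problem
    min_g 0.5 * ||x - D g||^2 + lam * ||g||_1,
    the building block that the multi-layer algorithms nest across layers."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    g = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = soft_threshold(g + D.T @ (x - D @ g) / L, lam / L)
    return g

def ml_ista_2layer(D1, D2, x, lam1, lam2, n_iter=100):
    """Illustrative two-layer nested pursuit (a sketch only): alternate a
    proximal-gradient step on the layer-1 representation with a thresholded
    gradient step on the layer-2 representation that re-expresses it."""
    L1 = np.linalg.norm(D1, 2) ** 2
    L2 = np.linalg.norm(D2, 2) ** 2
    g2 = np.zeros(D2.shape[1])
    for _ in range(n_iter):
        p = D2 @ g2  # current synthesis of the layer-1 code from layer 2
        g1 = soft_threshold(p + D1.T @ (x - D1 @ p) / L1, lam1 / L1)
        g2 = soft_threshold(D2.T @ g1 / L2, lam2 / L2)
    return g1, g2
```

Unfolding a fixed number of such iterations and making the dictionaries learnable is, in spirit, how the abstract's Learned ML-ISTA turns the pursuit into a recurrent CNN without adding parameters beyond the dictionaries themselves.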


