Published in: The 2010 International Joint Conference on Neural Networks

A neurodynamic optimization approach to constrained sparsity maximization based on alternative objective functions



Abstract

In recent years, constrained sparsity maximization problems have received considerable attention in the context of compressive sensing. Because the resulting constrained L0 norm minimization problem is NP-hard, constrained L1 norm minimization is usually used to compute approximate sparse solutions. In this paper, we introduce several alternative objective functions, such as the weighted L1 norm and the Laplacian, hyperbolic secant, and Gaussian functions, as approximations of the L0 norm. A one-layer recurrent neural network is applied to compute optimal solutions of the reformulated minimization problems subject to equality constraints. Simulation results in terms of time responses, phase diagrams, and tabular data are provided to demonstrate the superior performance of the proposed neurodynamic optimization approach to constrained sparsity maximization based on the problem reformulations.
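The abstract describes two ingredients: smooth surrogates of the L0 norm (e.g. Gaussian, Laplacian, hyperbolic secant) and a continuous-time recurrent network that minimizes the surrogate subject to the equality constraints Ax = b. The sketch below is illustrative only, assuming a Gaussian surrogate of the common form sum(1 − exp(−x²/2σ²)) and a projected gradient flow as the continuous-time dynamics; the paper's exact surrogate parameterizations and network model are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = 0.1  # smoothing width; smaller sigma tracks the L0 norm more closely

def gaussian_penalty(x, sigma=SIGMA):
    """Smooth surrogate for ||x||_0: sum_i (1 - exp(-x_i^2 / (2 sigma^2))).

    Analogous surrogates: Laplacian 1 - exp(-|x|/sigma) and
    hyperbolic secant 1 - sech(x/sigma).
    """
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

def gaussian_grad(x, sigma=SIGMA):
    """Gradient of the Gaussian surrogate above."""
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

# Underdetermined linear system Ax = b with a sparse ground truth.
rng = np.random.default_rng(0)
m, n = 4, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true

# Orthogonal projector onto null(A): since A @ P == 0, the flow below
# never leaves the feasible set {x : Ax = b}.
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
x0 = A.T @ np.linalg.solve(A @ A.T, b)  # feasible (least-norm) initial state

# Projected gradient flow dx/dt = -P grad f(x): a continuous-time
# dynamical system in the spirit of a recurrent neurodynamic solver.
sol = solve_ivp(lambda t, x: -P @ gaussian_grad(x), (0.0, 50.0), x0,
                rtol=1e-8, atol=1e-10)
x_final = sol.y[:, -1]

print("constraint residual:", np.linalg.norm(A @ x_final - b))
print("surrogate value: %.3f -> %.3f"
      % (gaussian_penalty(x0), gaussian_penalty(x_final)))
```

Along such a flow the surrogate value is non-increasing while the equality constraints stay satisfied; the choice of σ trades off smoothness of the dynamics against fidelity to the L0 norm.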
