IEEE Transactions on Neural Networks and Learning Systems

A Two-Layer Recurrent Neural Network for Nonsmooth Convex Optimization Problems

Abstract

In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush–Kuhn–Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
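As a concrete illustration of the L1-norm application mentioned above, the sketch below simulates a generic projection-type recurrent dynamics for min ||x||_1 subject to Ax = b by Euler integration. This is a minimal sketch under stated assumptions, not the paper's exact two-layer model: the dynamics dx/dt = -P sign(x) - A^T(Ax - b), the helper l1_min_dynamics, and all parameter values are illustrative, and this simpler flow reaches the equality feasible region only exponentially fast, whereas the proposed network is proved to reach it in finite time.

```python
import numpy as np

def l1_min_dynamics(A, b, x0, dt=1e-3, steps=100_000):
    """Euler integration of a generic projection-type recurrent network
    for  min ||x||_1  s.t.  A x = b  (NOT the paper's exact model):

        dx/dt = -P g(x) - A^T (A x - b),

    where g(x) = sign(x) is a subgradient of ||x||_1 and
    P = I - A^T (A A^T)^{-1} A projects onto the null space of A.
    Since A P = 0, the residual e = A x - b obeys de/dt = -A A^T e,
    so the state approaches the equality feasible region; on that
    manifold the flow is a projected subgradient descent of ||x||_1.
    """
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = np.sign(x)                        # a subgradient of the L1 norm
        x += dt * (-P @ g - A.T @ (A @ x - b))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 50, 20, 3                       # dimension, equations, sparsity
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true                            # consistent linear constraints
    x = l1_min_dynamics(A, b, np.zeros(n))
    print("||Ax - b|| =", np.linalg.norm(A @ x - b))
    print("||x - x_true|| =", np.linalg.norm(x - x_true))
```

Because the subgradient step is projected onto the null space of A, it never reintroduces constraint violation; the separate -A^T(Ax - b) term handles feasibility, which mirrors the abstract's separation between reaching the equality feasible region and then optimizing within it. With a fixed step size, the discretized trajectory chatters within O(dt) of the L1 minimizer rather than converging exactly.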
