Published in: Soft Computing: A Fusion of Foundations, Methodologies and Applications

An efficient neural network for solving convex optimization problems with a nonlinear complementarity problem function



Abstract

In this paper, we present a one-layer recurrent neural network (NN) for solving convex optimization problems based on the Mangasarian and Solodov (MS) implicit Lagrangian function. Using the Karush-Kuhn-Tucker conditions and the MS function, the NN model is derived from an unconstrained minimization problem. The proposed model has a single layer and, compared with available NNs for solving convex optimization problems, achieves better convergence time. It is stable in the sense of Lyapunov and globally convergent to the optimal solution of the original problem. Finally, simulation results on several numerical examples are presented, demonstrating the validity of the proposed NN model.
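As a rough illustration of the approach described in the abstract (not the paper's exact model), consider a toy quadratic program min ½xᵀQx + cᵀx subject to x ≥ 0. Its KKT conditions form a nonlinear complementarity problem with F(x) = Qx + c, and the MS implicit Lagrangian M_α (α > 1) is a nonnegative merit function that vanishes exactly at NCP solutions. The recurrent network then corresponds to the gradient flow dz/dt = -∇M_α(z), here Euler-discretized; the step size, iteration count, and finite-difference gradient (standing in for the closed-form gradient a real implementation would use) are illustrative assumptions.

```python
import numpy as np

def implicit_lagrangian(z, F, alpha=2.0):
    """Mangasarian-Solodov implicit Lagrangian merit function for the NCP:
    find z >= 0 with F(z) >= 0 and z^T F(z) = 0.
    M_alpha >= 0 everywhere, and M_alpha(z) = 0 exactly at NCP solutions
    (for alpha > 1)."""
    Fz = F(z)
    plus = lambda v: np.maximum(v, 0.0)  # componentwise positive part
    return z @ Fz + (np.sum(plus(z - alpha * Fz) ** 2) - z @ z
                     + np.sum(plus(Fz - alpha * z) ** 2) - Fz @ Fz) / (2 * alpha)

def num_grad(f, z, h=1e-6):
    """Central-difference gradient (illustrative stand-in for a closed-form gradient)."""
    g = np.zeros_like(z)
    for i in range(len(z)):
        e = np.zeros_like(z)
        e[i] = h
        g[i] = (f(z + e) - f(z - e)) / (2 * h)
    return g

# Toy QP:  min 1/2 x^T Q x + c^T x  s.t.  x >= 0,
# whose KKT conditions are the NCP with F(x) = Q x + c.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])
F = lambda x: Q @ x + c

# Euler-discretized "neural network" dynamics: dz/dt = -grad M_alpha(z).
M = lambda v: implicit_lagrangian(v, F)
z = np.array([0.5, 0.5])
for _ in range(2000):
    z = z - 0.05 * num_grad(M, z)

print(np.round(z, 3))  # → [1. 0.], the QP's optimal solution
```

The QP's optimum is x* = (1, 0): the first constraint is inactive (F₁(x*) = 0) and the second is active (x₂ = 0, F₂(x*) = 1 > 0), so the trajectory settles at the unique point where the merit function vanishes.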
