Journal: IEEE Transactions on Neural Networks

Finite-Time Convergent Recurrent Neural Network With a Hard-Limiting Activation Function for Constrained Optimization With Piecewise-Linear Objective Functions



Abstract

This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
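The constrained least absolute deviation (LAD) application mentioned above can be illustrated with a minimal numerical sketch. The dynamics below — an Euler-discretized subgradient flow `dx/dt = -sigma * A^T sign(Ax - b)` for the unconstrained LAD objective `f(x) = ||Ax - b||_1`, driven through a hard-limiting (signum) activation — are an assumption for illustration only, not the paper's exact model; the gain `sigma`, step size, and function names are hypothetical.

```python
import numpy as np

def hard_limit(u):
    # Hard-limiting (signum) activation, applied elementwise.
    return np.sign(u)

def lad_neurodynamics(A, b, sigma=1.0, dt=1e-3, steps=20000):
    """Euler simulation of a single-layer recurrent network
        dx/dt = -sigma * A^T hard_limit(A x - b),
    a subgradient flow of the piecewise-linear objective
    f(x) = ||A x - b||_1 (unconstrained LAD sketch; the paper's
    model additionally handles constraints and derives a lower
    bound on the gain for finite-time convergence)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x += dt * (-sigma * A.T @ hard_limit(A @ x - b))
    return x
```

Note that one neuron per decision variable suffices here, matching the low model complexity claimed in the abstract; a larger gain `sigma` speeds convergence but enlarges the chattering band of the discretized hard-limiting dynamics.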
