
A Recurrent Neural Network for Solving Nonconvex Optimization Problems



Abstract

An existing recurrent neural network for convex optimization is extended to solve nonconvex optimization problems. A prominent feature of this neural network is the one-to-one correspondence between its equilibria and the Karush-Kuhn-Tucker (KKT) points of the nonconvex optimization problem. Conditions are derived under which the neural network (locally) converges to the KKT points. Ideally, the neural network should be stable at minimum solutions and unstable at maximum or saddle solutions. The paper shows that the neural network is most likely unstable at maximum solutions. Moreover, if the derived conditions are not satisfied at a minimum solution, they can be satisfied by transforming the original problem into an equivalent one via the p-power (or partial p-power) method; the neural network then locally converges to a minimum solution. Finally, two illustrative examples demonstrate the performance of the recurrent neural network.
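The abstract does not give the network's dynamics, but recurrent networks of this kind are commonly written as a gradient-flow ODE in the primal variables and multipliers whose equilibria are exactly the KKT points. The sketch below is a hypothetical minimal example, assuming dynamics of that common projection form, applied to a simple nonconvex problem (minimize f(x) = x^4 - 2x^2 subject to x - 1.5 <= 0) and integrated with forward Euler; it is an illustration of the idea, not the paper's specific model.

```python
# Sketch: a recurrent-network-style ODE whose equilibria are KKT points.
# Problem (assumed for illustration): min f(x) = x**4 - 2*x**2
#                                     s.t. g(x) = x - 1.5 <= 0
# f is nonconvex, with local minima at x = +1 and x = -1.
# Dynamics (a common projection form, not necessarily the paper's):
#   dx/dt  = -( f'(x) + g'(x) * max(lam + g(x), 0) )
#   dlam/dt =  max(lam + g(x), 0) - lam
# At an equilibrium, stationarity f'(x) + lam*g'(x) = 0 and the
# complementarity conditions of KKT hold.

def f_grad(x):
    return 4 * x**3 - 4 * x      # f'(x)

def g(x):
    return x - 1.5               # constraint value g(x)

def g_grad(x):
    return 1.0                   # g'(x)

def solve(x0, lam0=0.0, h=0.01, steps=5000):
    """Integrate the network ODE with forward Euler from (x0, lam0)."""
    x, lam = x0, lam0
    for _ in range(steps):
        proj = max(lam + g(x), 0.0)          # projected multiplier
        dx = -(f_grad(x) + g_grad(x) * proj)
        dlam = proj - lam
        x += h * dx
        lam += h * dlam
    return x, lam

if __name__ == "__main__":
    x, lam = solve(x0=0.5)
    print(x, lam)  # trajectory settles at the local minimum x = 1
    # KKT stationarity residual at the equilibrium:
    print(abs(f_grad(x) + lam * g_grad(x)))
```

Starting from x0 = 0.5, the state flows to the nearby local minimum x = 1 with the constraint inactive (lam = 0), illustrating the local-convergence behavior the abstract describes; started near a maximum (x = 0) the equilibrium is unstable, so small perturbations push the trajectory away.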
