
Design methodology and stability analysis of recurrent neural networks for constrained optimization.



Abstract

Constrained optimization problems arise frequently in scientific research and engineering applications. Solving optimization problems with recurrent neural networks has been investigated extensively because of the advantages of massively parallel operation and rapid convergence. However, apart from a few neural networks for solving linear and quadratic programming problems, most existing recurrent neural networks have limitations and disadvantages in computation or implementation.

In this thesis, we first describe the principles for designing recurrent neural networks, and then introduce three methods, each with a deterministic procedure, for designing neural networks for constrained optimization. Two of them are used for designing continuous-time neural networks, and the third for designing discrete-time neural networks. The advantages of the proposed approach are fourfold. First, it guarantees that every designed network is stable in the sense of Lyapunov and globally convergent, so the complex stability analysis of each resulting network can be dispensed with. Second, the derived neural networks can cope with problems having unbounded solution sets. Third, the gradient and non-gradient methods employed in existing optimization neural networks are special cases of the proposed approach, which therefore has greater generality. Fourth, it may provide more alternative neural networks that are simple to implement.

On the theoretical side, we prove that the proposed neural networks converge globally to exact solutions under monotonicity and Lipschitz conditions on the mapping, under monotonicity and symmetry conditions on the mapping, or under the conditions of a symmetric mapping and a bounded feasible set. In particular, the proposed neural networks are globally asymptotically stable when the solution is unique. Furthermore, we prove their global exponential stability under a strong monotonicity condition.
In addition, we study the global stability of the Kennedy and Chua neural network, thereby improving the existing stability results. A new recurrent neural network, called the dual neural network, is also presented. The number of neurons in the proposed dual network equals the dimensionality of the workspace. Compared with the existing neural network for computing the inverse kinematics, the dual neural network is smaller in size and possesses desirable exponential stability. (Abstract shortened by UMI.)
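The abstract does not spell out the network dynamics, but a standard continuous-time model in this line of work is the projection neural network, whose state evolves as dx/dt = P(x - a*grad f(x)) - x, where P is the projection onto the feasible set; its equilibria coincide with the constrained optima. Below is a minimal Python sketch simulating such dynamics by Euler integration for a box-constrained quadratic program. The specific problem data, step sizes, and function names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return np.clip(x, lo, hi)

def projection_nn(Q, c, lo, hi, x0, alpha=0.1, dt=0.01, steps=20000):
    """Euler simulation of the projection dynamics
    dx/dt = P(x - alpha*(Qx + c)) - x
    for minimizing 0.5*x'Qx + c'x over the box [lo, hi]."""
    x = x0.astype(float)
    for _ in range(steps):
        grad = Q @ x + c              # gradient of the quadratic objective
        x = x + dt * (project_box(x - alpha * grad, lo, hi) - x)
    return x

# Illustrative problem: minimize x1^2 + x2^2 - 2*x1 - 4*x2 over [0,1]^2.
# The unconstrained minimizer is (1, 2); the box clips it to (1, 1).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x = projection_nn(Q, c, lo=0.0, hi=1.0, x0=np.zeros(2))
print(x)  # converges to [1, 1]
```

For a separable quadratic like this, the equilibrium can be checked by hand: at x = (1, 1) the projected update P(x - alpha*grad) returns x itself, so dx/dt = 0, matching the state the trajectory settles into.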

Bibliographic record

  • Author: Xia, You-sheng
  • Affiliation: Chinese University of Hong Kong (People's Republic of China)
  • Degree grantor: Chinese University of Hong Kong (People's Republic of China)
  • Subjects: Computer Science; Engineering System Science
  • Degree: Ph.D.
  • Year: 2000
  • Pagination: 165 p.
  • Total pages: 165
  • Format: PDF
  • Language: English
  • Classification: Automation and computer technology; Systems science
  • Keywords
