...
IEEE Transactions on Automatic Control

Minimum-seeking properties of analog neural networks with multilinear objective functions

Abstract

In this paper, we study the problem of minimizing a multilinear objective function over the discrete set {0, 1}^n. This is an extension of earlier work that addressed the problem of minimizing a quadratic function over {0, 1}^n. A gradient-type neural network is proposed to perform the optimization. A novel feature of the network is the introduction of a so-called bias vector. The network is operated in the high-gain region of the sigmoidal nonlinearities. The following comprehensive theorem is proved: for all sufficiently small bias vectors except those belonging to a set of measure zero, for all sufficiently large sigmoidal gains, and for all initial conditions except those belonging to a nowhere dense set, the state of the network converges to a local minimum of the objective function. This is a considerable generalization of earlier results for quadratic objective functions. Moreover, the proofs here are completely rigorous. The neural-network-based approach to optimization is briefly compared to the so-called interior-point methods of nonlinear programming, as exemplified by Karmarkar's algorithm. Some problems for future research are suggested.
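As a rough illustration of the kind of dynamics the abstract describes (not the paper's exact construction), the sketch below simulates a Hopfield-style gradient flow: each neuron integrates the negative partial derivative of a multilinear objective plus a small perturbing bias, and the neuron outputs are high-gain sigmoids of the internal states. The example objective f(x) = 2*x0*x1 - 3*x1*x2 + x0 - x2, the parameter values, the Euler discretization, and the state clipping are all illustrative assumptions.

```python
import numpy as np

def grad_f(x):
    # Gradient of an illustrative multilinear objective (not from the paper):
    #   f(x) = 2*x0*x1 - 3*x1*x2 + x0 - x2
    x0, x1, x2 = x
    return np.array([2.0 * x1 + 1.0,
                     2.0 * x0 - 3.0 * x2,
                     -3.0 * x1 - 1.0])

def run_network(gain=50.0, bias_scale=1e-3, dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    bias = bias_scale * rng.standard_normal(3)   # small "generic" bias vector
    u = rng.standard_normal(3)                   # generic initial condition
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-gain * u))      # high-gain sigmoidal outputs
        # Gradient-type dynamics (Euler step); clip the internal state so the
        # exponential stays well-behaved, mimicking amplifier saturation.
        u = np.clip(u + dt * (-grad_f(x) - bias), -1.0, 1.0)
    return 1.0 / (1.0 + np.exp(-gain * u))

if __name__ == "__main__":
    x_final = run_network()
    print(np.round(x_final))   # should land near a vertex of {0, 1}^3
```

With a sufficiently large gain the outputs are driven essentially to 0 or 1, so the final state can be read off as a vertex of the hypercube; the small random bias plays the role sketched in the abstract of avoiding degenerate equilibria for almost all choices of bias vector.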
