...
Neurocomputing

Perceptron training algorithms designed using discrete-time control Liapunov functions



Abstract

Perceptrons, proposed in the seminal 1943 paper of McCulloch and Pitts, have remained of interest to the neural network community because of their simplicity and usefulness in classifying linearly separable data, and they can be viewed as implementing iterative procedures for "solving" linear inequalities. Gradient descent and conjugate gradient methods, normally used for linear equalities, can be adapted to linear inequalities through simple modifications that have been proposed in the literature but not analyzed completely. This paper applies a recently proposed control-inspired approach to the design of iterative steepest descent and conjugate gradient algorithms for perceptron training in batch mode: certain parameters of the training algorithm are regarded as controls, and a control Liapunov technique is then used to choose appropriate values of these parameters.
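The batch-mode steepest-descent training that the abstract refers to can be sketched as follows. This is a minimal illustration, assuming the standard perceptron misclassification criterion and a fixed step size; it does not reproduce the paper's control-Liapunov choice of step-size parameters, and the function name and data layout are hypothetical:

```python
import numpy as np

def batch_perceptron(X, y, step=1.0, epochs=100):
    """Batch perceptron training as steepest descent on the
    misclassification criterion J(w) = sum_{i misclassified} -y_i * (w . x_i).

    Finding w with y_i * (w . x_i) > 0 for all i is exactly the
    "solving linear inequalities" view of perceptron training.
    NOTE: `step` is a plain constant here; the paper instead treats such
    parameters as controls chosen via a control Liapunov function.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)          # y_i * (w . x_i) for every sample
        mis = margins <= 0             # misclassified (or on the boundary)
        if not mis.any():
            break                      # all inequalities satisfied
        grad = -(y[mis, None] * X[mis]).sum(axis=0)  # gradient of J at w
        w -= step * grad               # steepest-descent update
    return w
```

For linearly separable data this loop terminates once every inequality y_i * (w . x_i) > 0 holds; the role of the control-Liapunov design in the paper is to pick the step-size (and, for conjugate gradient, the direction-mixing) parameters so that a Liapunov function of the training error provably decreases at each iteration.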
