Journal: Neural Networks: The Official Journal of the International Neural Network Society

Analysis and test of efficient methods for building recursive deterministic perceptron neural networks.



Abstract

The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. Unlike the single-layer perceptron, which can only handle classification problems involving linearly separable sets, this model is capable of solving any two-class classification problem. For every such problem, the construction of an RDP is carried out automatically and convergence is always guaranteed. Three methods exist for constructing RDP neural networks: Batch, Incremental, and Modular. The Batch method has been extensively tested and shown to produce results comparable with those obtained with other neural network methods such as Back Propagation, Cascade Correlation, Rulex, and Ruleneg. However, the Incremental and Modular methods have not been tested before. Unlike the Batch method, the complexity of these two methods is not NP-Complete. This paper presents, for the first time, a study of all three methods. The study highlights the main advantages and disadvantages of each method by comparing the RDP networks they build in terms of convergence time, level of generalisation, and topology size. The networks were trained and tested on the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer. The results show that the Incremental and Modular methods are as effective as the NP-Complete Batch method while having a much lower complexity. The results obtained with the RDP are comparable to those obtained with the Back Propagation and Cascade Correlation algorithms.
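The abstract does not detail the construction procedure, so the sketch below is only a rough illustration of the general idea it describes: an RDP is grown by repeatedly selecting a linearly separable subset of points from one class, training an intermediate single-layer perceptron to separate that subset from the rest, and appending that unit's output as an extra input dimension until the whole augmented problem becomes linearly separable. All names (train_perceptron, select_subset, build_rdp) are hypothetical, and the greedy subset search is a deliberate simplification standing in for the NP-Complete maximal-subset selection of the Batch method; it is not the authors' algorithm.

```python
import numpy as np

def train_perceptron(X, y, epochs=500, lr=1.0):
    """Plain perceptron with labels in {-1, +1}. Returns (w, b) once the
    data is classified without error, or None if no error-free pass is
    reached within `epochs` (treated here as "not linearly separable")."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:
            return w, b
    return None

def select_subset(Xa, y):
    """Greedily grow a same-class subset that stays linearly separable from
    all remaining points; return the perceptron separating it. A crude
    stand-in for the NP-Complete maximal-subset search of the Batch method."""
    best_unit, best_size = None, 0
    for c in (-1, 1):
        subset, unit = [], None
        for i in np.flatnonzero(y == c):
            t = -np.ones(len(y))
            t[subset + [i]] = 1.0
            cand = train_perceptron(Xa, t)
            if cand is not None:
                subset, unit = subset + [i], cand
        if len(subset) > best_size:
            best_unit, best_size = unit, len(subset)
    return best_unit

def build_rdp(X, y, max_units=10):
    """Add intermediate units until the augmented data is linearly separable."""
    Xa = X.astype(float).copy()
    units = []                                   # list of (w, b) pairs
    for _ in range(max_units):
        final = train_perceptron(Xa, y)
        if final is not None:                    # augmented problem solved
            return units, final
        unit = select_subset(Xa, y)
        if unit is None:
            break
        units.append(unit)
        # Append the new unit's +/-1 output as one more input dimension.
        out = np.sign(Xa @ unit[0] + unit[1])
        Xa = np.hstack([Xa, out[:, None]])
    raise RuntimeError("sketch did not converge within max_units")

def predict(units, final, X):
    Xa = X.astype(float).copy()
    for w, b in units:
        Xa = np.hstack([Xa, np.sign(Xa @ w + b)[:, None]])
    w, b = final
    return np.sign(Xa @ w + b)

if __name__ == "__main__":
    # XOR is not linearly separable, so at least one intermediate unit is needed.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, 1, 1, -1])
    units, final = build_rdp(X, y)
    print("intermediate units:", len(units))
    print("predictions:", predict(units, final, X).astype(int))
```

On the XOR example this sketch adds a single intermediate unit, after which the augmented four-point problem becomes linearly separable, illustrating how the construction always terminates on two-class problems under these assumptions.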
