
Development of optimal network structures for back-propagation-trained neural networks



Abstract

A critical question in neural network research today concerns how many hidden neurons to use. There is no magic formula, because the answer appears to depend largely on the complexity of the problem being solved. The potential performance impact of hidden layers and neurons must be taken into consideration in the network development process.

This study focuses mainly on how to develop an optimal neural network model for a specific task. In other words, for a given task, it is desired to find a neural network structure that has a minimal number of layers, a minimal number of units in each layer, and good generalization ability. A process to build an optimal network structure is proposed in this study. The core of this process is the direct weight pruning method. This method is based on mathematical deduction and on a property of the dominant subnet of a network trained by a back-propagation algorithm with normalized input data. The smallest-magnitude weight in the trained network is pruned sequentially. When no further pruning is possible, the isolated units of the network are deleted, thus simplifying the original trained network.

The proposed process is evaluated using two common benchmark problems: XOR and Parity. It is demonstrated that the new pruning method produces optimal network models while being both simple and efficient. The process is also evaluated using a real-world application problem: firm bankruptcy prediction. The performance of the neural network is compared to that of multivariate discriminant analysis models on matched bankruptcy samples. The neural network structure produced by the proposed process offers a superior modeling approach for firm bankruptcy prediction.
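The dissertation's direct weight pruning method is not reproduced here, but the following Python sketch illustrates the general procedure the abstract describes: sequentially zero the smallest-magnitude weight of a trained single-hidden-layer network, then delete hidden units left isolated. The function name prune_network, the validation-error stopping test (error_fn, tol), and the omission of bias terms are illustrative assumptions; the dissertation's own stopping criterion rests on mathematical deduction and the dominant-subnet property rather than on a validation check.

```python
# Minimal sketch of magnitude-based sequential weight pruning followed by
# removal of isolated hidden units, assuming a single-hidden-layer network
# whose trained weights are available as NumPy arrays.
import numpy as np

def prune_network(W1, W2, error_fn, tol=1e-3):
    """W1: input-to-hidden weights (n_in x n_hidden).
    W2: hidden-to-output weights (n_hidden x n_out).
    error_fn(W1, W2): returns validation error of the pruned network.
    Repeatedly zeroes the smallest-magnitude remaining weight until doing so
    would raise the error by more than `tol`, then deletes hidden units left
    with no incoming or no outgoing connections (isolated units)."""
    W1, W2 = W1.copy(), W2.copy()
    baseline = error_fn(W1, W2)
    while True:
        # Locate the smallest nonzero weight across both layers.
        candidates = []
        for W in (W1, W2):
            rows, cols = np.nonzero(W)
            if rows.size:
                k = np.argmin(np.abs(W[rows, cols]))
                candidates.append((abs(W[rows[k], cols[k]]), W, (rows[k], cols[k])))
        if not candidates:
            break
        _, W, pos = min(candidates, key=lambda c: c[0])
        saved = W[pos]
        W[pos] = 0.0                           # tentatively prune this weight
        if error_fn(W1, W2) > baseline + tol:  # pruning hurt: restore and stop
            W[pos] = saved
            break
    # Delete hidden units whose incoming or outgoing weights are all zero.
    keep = ~(np.all(W1 == 0, axis=0) | np.all(W2 == 0, axis=1))
    return W1[:, keep], W2[keep, :]
```

The restore-and-stop step mirrors the sequential, one-weight-at-a-time character of the pruning described in the abstract; for a trained XOR network, such a loop would be expected to strip redundant connections and leave a much smaller hidden layer, though the exact result depends on the trained weights and the stopping criterion chosen.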

Bibliographic record

  • Author: Guan, Qing
  • Affiliation: The University of Nebraska - Lincoln
  • Degree-granting institution: The University of Nebraska - Lincoln
  • Subjects: Management; Computer science; Artificial intelligence
  • Degree: Ph.D.
  • Year: 1993
  • Pagination: 176 p.
  • Total pages: 176
  • Original format: PDF
  • Language: English (eng)
