Computers & Operations Research
Multilayer neural networks: an experimental evaluation of on-line training methods

Abstract

Artificial neural networks (ANNs) are inspired by the structure of biological neural networks and by their ability to integrate knowledge and learning. In ANN training, the objective is to minimize the error over the training set. The most popular method for training these networks is back-propagation, a gradient-descent technique. Other non-linear optimization methods, such as conjugate direction set or conjugate gradient methods, have also been used for this purpose. More recently, metaheuristics such as simulated annealing, genetic algorithms, and tabu search have been adapted to this context. There are situations in which the necessary training data are generated in real time and extensive training is not possible; this "on-line" training arises in the context of optimizing a simulation. This paper presents extensive computational experiments comparing 12 "on-line" training methods on a collection of 45 functions from the literature, within a short-term horizon. We propose a new method based on the tabu search methodology that competes in quality with the best previous approaches.

Scope and purpose

Artificial neural networks present a new paradigm for decision support that integrates knowledge and learning. They are inspired by biological neural systems, in which the nodes of the network represent the neurons and the arcs represent the axons and dendrites. In recent years there has been increasing interest in ANNs, since they have proven very effective in a variety of contexts. In this paper we focus on the prediction/estimation problem for a given function, where the input of the net is given by the values of the function's variables and the output is an estimate of the function's value. Specifically, we consider the optimization problem that arises when training the net in the context of optimizing simulations (i.e., when the training time is limited).
As far as we know, only partial studies have been published, in which a few training methods are compared on a limited set of instances. In this paper we present extensive computational experiments with 12 different optimization methods on a set of 45 well-known functions.
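The setting the abstract describes, minimizing the error over a training set by gradient descent while the data arrive one sample at a time, can be sketched as follows. This is a minimal illustration only, not the paper's implementation or any of the 12 methods it evaluates; the network size, learning rate, step budget, and sampling scheme are arbitrary assumptions.

```python
import math
import random

def train_online(f, n_hidden=8, steps=2000, lr=0.1, seed=0):
    """Train a 1-hidden-layer tanh network to approximate a scalar
    function f(x) on [-1, 1], taking one back-propagation step per
    freshly generated sample ("on-line" training)."""
    rng = random.Random(seed)
    # Randomly initialized weights: input->hidden (w1, b1), hidden->output (w2, b2).
    w1 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = rng.uniform(-1, 1)

    for _ in range(steps):
        x = rng.uniform(-1, 1)   # training data generated on the fly
        t = f(x)                 # target value for this sample
        # Forward pass.
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(n_hidden)]
        y = sum(w2[i] * h[i] for i in range(n_hidden)) + b2
        e = y - t                # prediction error
        # One gradient-descent step on the squared error for this sample.
        for i in range(n_hidden):
            g_h = e * w2[i] * (1 - h[i] ** 2)  # back-propagate through tanh
            w2[i] -= lr * e * h[i]
            w1[i] -= lr * g_h * x
            b1[i] -= lr * g_h
        b2 -= lr * e

    def predict(x):
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(n_hidden)]
        return sum(w2[i] * h[i] for i in range(n_hidden)) + b2

    return predict

# Example: learn x^2 on [-1, 1] from a stream of samples.
net = train_online(lambda x: x * x)
```

The short-term-horizon constraint studied in the paper corresponds here to the small, fixed `steps` budget: the quality of the approximation depends on how much error reduction a method achieves within that limited number of updates.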