Neural Processing Letters

Optimizing Deep Feedforward Neural Network Architecture: A Tabu Search Based Approach



Abstract

The optimal architecture of a deep feedforward neural network (DFNN) is essential for good accuracy and fast convergence, and training a DFNN becomes increasingly tedious as the network grows deeper. A DFNN can be tuned through several parameters, such as the number of hidden layers, the number of neurons in each hidden layer, and the number of connections between layers. The architecture is usually chosen by trial and error, which is an exponential combinatorial problem and a tedious task. To address this, an algorithm is needed that can automatically design an optimal architecture with improved generalization ability. This work proposes a new methodology that simultaneously optimizes the number of hidden layers and the number of neurons in each layer of a DFNN, combining the strengths of Tabu search with gradient-descent-with-momentum backpropagation training. The proposed approach has been tested on four classification benchmark datasets, on which the optimized networks show better generalization ability.
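The abstract does not give the algorithm's details, but the core idea of searching architecture space with Tabu search can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the `evaluate` function here is a toy surrogate standing in for training the candidate DFNN with momentum backpropagation and measuring validation accuracy, and the neighborhood moves (resize a layer, add or drop a layer) are assumptions.

```python
def evaluate(arch):
    # Stand-in for "train this architecture and return validation score".
    # This toy surrogate just prefers ~3 layers of ~32 neurons; in the
    # paper's setting this would be a full training run of the DFNN.
    depth_penalty = abs(len(arch) - 3)
    width_penalty = sum(abs(n - 32) for n in arch) / 100.0
    return -(depth_penalty + width_penalty)

def neighbors(arch):
    # Candidate moves: grow/shrink one hidden layer, drop the last
    # layer, or append a new small layer (all move sizes are assumed).
    cands = []
    for i in range(len(arch)):
        for delta in (-8, 8):
            n = arch[i] + delta
            if n > 0:
                cands.append(arch[:i] + (n,) + arch[i + 1:])
    if len(arch) > 1:
        cands.append(arch[:-1])
    cands.append(arch + (16,))
    return cands

def tabu_search(start, iters=50, tenure=5):
    # Classic Tabu search: always move to the best non-tabu neighbor
    # (even if worse), keeping a short-term memory of visited points.
    best = current = start
    best_score = evaluate(best)
    tabu = [start]
    for _ in range(iters):
        cands = [a for a in neighbors(current) if a not in tabu]
        if not cands:
            break
        current = max(cands, key=evaluate)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu entry
        score = evaluate(current)
        if score > best_score:
            best, best_score = current, score
    return best

if __name__ == "__main__":
    # Start from a single hidden layer of 64 neurons.
    print(tabu_search((64,)))
```

The tabu list is what lets the search escape local optima: because the move to the best neighbor is unconditional, a short memory of recent architectures is needed to keep it from cycling back.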

