Training feedforward neural network via multiobjective optimization model using non-smooth L(1/2) regularization

Abstract

The paper presents a new approach to optimizing the Multilayer Perceptron Neural Network (MLPNN) that addresses the generalization problem. Most supervised learning algorithms aim only to minimize the training error; however, methods based solely on error minimization may produce solutions with insufficient generalization performance. Since the learning problem is multiobjective by nature, this work formulates training as a multiobjective optimization problem with two objectives: accuracy and complexity, the latter penalized through non-smooth L(1/2) regularization. Both objectives are minimized simultaneously, according to the Pareto dominance concept, using NSGA-II (Non-dominated Sorting Genetic Algorithm II) as the solver. The method yields a set of solutions called the Pareto front, the set of optimal trade-offs from which an adequate MLPNN must be extracted. We show empirically that the proposed method reduces the neural network topology and improves generalization performance, while achieving a good classification rate compared with other methods. (C) 2020 Published by Elsevier B.V.
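To make the formulation concrete, below is a minimal sketch of the two-objective setup the abstract describes: objective one is the training error of a one-hidden-layer MLP whose weights form the decision vector, and objective two is the non-smooth L(1/2) penalty sum_i |w_i|^(1/2). It is not the authors' implementation: the use of the pymoo library for NSGA-II, the toy dataset, the network topology, and all hyperparameters are assumptions made for illustration.

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))              # toy inputs (assumption)
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # toy XOR-like labels (assumption)

N_IN, N_HID = 2, 5                           # assumed topology
N_W = N_IN * N_HID + N_HID + N_HID + 1       # weights + biases, flattened

def forward(w, X):
    """One-hidden-layer MLP; w is the flattened parameter vector."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

class MLPProblem(ElementwiseProblem):
    """Two objectives: training error and non-smooth L(1/2) complexity."""
    def __init__(self):
        super().__init__(n_var=N_W, n_obj=2, xl=-5.0, xu=5.0)

    def _evaluate(self, w, out, *args, **kwargs):
        err = np.mean((forward(w, X) - y) ** 2)   # accuracy objective
        reg = np.sum(np.abs(w) ** 0.5)            # L(1/2) regularization term
        out["F"] = [err, reg]

res = minimize(MLPProblem(), NSGA2(pop_size=60), ("n_gen", 100),
               seed=1, verbose=False)
# res.F holds the Pareto front (error vs. complexity); res.X the weight vectors.

After the run, res.F contains the nondominated trade-offs between error and the L(1/2) penalty; selecting one network from this front, for instance at a knee point of the curve, corresponds to the extraction step the abstract mentions. Note that because the L(1/2) term is non-smooth, a population-based solver such as NSGA-II sidesteps the gradient difficulties that the penalty would pose for standard backpropagation.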