Journal: Neurocomputing

Progressive Operational Perceptrons

Abstract

There are well-known limitations and drawbacks in the performance and robustness of feed-forward, fully connected Artificial Neural Networks (ANNs), the so-called Multi-Layer Perceptrons (MLPs). In this study we address them with Generalized Operational Perceptrons (GOPs), which consist of neurons with distinct (non-)linear operators, achieving a generalized model of the biological neuron and ultimately a superior diversity. We modify conventional back-propagation (BP) to train GOPs and, furthermore, propose Progressive Operational Perceptrons (POPs) to achieve self-organized, depth-adaptive GOPs tailored to the learning problem. The most crucial property of POPs is their ability to simultaneously search for the optimal operator set and train each layer individually. The final POP is therefore formed layer by layer, and in this paper we show that this ability enables POPs of minimal network depth to attack the most challenging learning problems, problems that cannot be learned by conventional ANNs even with a deeper and significantly more complex configuration. Experimental results show that POPs scale up very well with the problem size and have the potential to achieve superior generalization performance on real benchmark problems with a significant gain.
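To make the neuron model concrete: a GOP neuron replaces the perceptron's fixed multiply-sum-activate pipeline with three interchangeable operators, a nodal operator psi applied to each (weight, input) pair, a pool operator rho that aggregates the nodal outputs, and an activation f. The NumPy sketch below illustrates this structure; the operator choices listed (multiplication, sine, exponential; summation, max, median) are common examples rather than the paper's exact library, and the function name gop_layer is illustrative.

```python
import numpy as np

# Candidate nodal operators psi(w, y), applied element-wise to each
# (weight, input) pair. Plain multiplication recovers the classic perceptron.
NODAL = {
    "mult": lambda w, y: w * y,
    "sine": lambda w, y: np.sin(w * y),
    "exp":  lambda w, y: np.exp(w * y) - 1.0,
}

# Candidate pool operators rho, reducing the nodal outputs over the inputs.
# Summation recovers the classic perceptron.
POOL = {
    "sum":    lambda z: z.sum(axis=-1),
    "max":    lambda z: z.max(axis=-1),
    "median": lambda z: np.median(z, axis=-1),
}

ACT = {"tanh": np.tanh}  # activation f, as in conventional MLPs

def gop_layer(y_prev, W, b, ops):
    """Forward pass of one GOP layer.

    y_prev: (n_in,) outputs of the previous layer
    W:      (n_out, n_in) weights;  b: (n_out,) biases
    ops:    (nodal, pool, activation) operator names for this layer
    """
    psi, rho, f = NODAL[ops[0]], POOL[ops[1]], ACT[ops[2]]
    z = psi(W, y_prev[None, :])   # (n_out, n_in) nodal outputs
    return f(rho(z) + b)          # pool over inputs, add bias, activate
```

With ops = ("mult", "sum", "tanh") the layer reduces exactly to a conventional MLP layer, which is the sense in which GOPs strictly generalize MLPs: the MLP's operators become one point in a searchable operator space.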
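The progressive, layer-wise construction can then be pictured as a greedy loop: grow one hidden layer at a time, evaluate every candidate operator set for that layer, keep the best performer, and stop as soon as the target error is met. The sketch below (reusing gop_layer from above) shows only this search structure; it substitutes random hidden weights plus a least-squares readout for the modified BP training used in the paper, and names such as grow_pop and readout_mse are hypothetical.

```python
from itertools import product

def readout_mse(H, T):
    """MSE of a least-squares linear readout fitted on hidden outputs H;
    a cheap stand-in for the per-layer BP training used in the paper."""
    A = np.c_[H, np.ones(len(H))]                      # add bias column
    W, *_ = np.linalg.lstsq(A, T, rcond=None)
    return np.mean((A @ W - T) ** 2)

def grow_pop(X, T, n_hidden=8, max_depth=4, target_mse=1e-3, seed=0):
    """Greedy, depth-adaptive POP construction (schematic)."""
    rng = np.random.default_rng(seed)
    layers, feed, err = [], X, np.inf
    for _ in range(max_depth):
        best = None
        for nodal, pool in product(NODAL, POOL):       # operator-set search
            W = rng.normal(scale=0.5, size=(n_hidden, feed.shape[1]))
            b = rng.normal(scale=0.1, size=n_hidden)
            H = np.stack([gop_layer(x, W, b, (nodal, pool, "tanh"))
                          for x in feed])
            e = readout_mse(H, T)
            if best is None or e < best[0]:
                best = (e, (nodal, pool), H)
        err, ops, feed = best                          # keep the best layer
        layers.append(ops)
        if err <= target_mse:                          # depth adapts to the
            break                                      # learning problem
    return layers, err
```

Each pass through the outer loop corresponds to forming one layer of the final POP; because the operator search and the layer training happen together, the network's depth and its operator sets are both outcomes of the learning problem rather than fixed design choices.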

