Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology
Training contextual neural networks with rectifier activation functions: Role and adoption of sorting methods

Abstract

Contextual neural networks are effective and highly usable machine learning models that generalize the multilayer perceptron. They make it possible to solve classification problems with high accuracy while strongly limiting the activity of connections between hidden neurons. In this article we present a novel study of the properties of contextual neural networks with Hard and Exponential Rectifier activation functions, and of their influence on the behavior of the Generalized Error Backpropagation method. The study shows how to optimize the efficiency of the sorting phase of this algorithm when it is applied to train the evaluated models. This considerably extends our previous related paper, which was limited to the analysis of contextual neural networks with Leaky Rectifier and Sigmoidal activation functions. The article includes a broad description of contextual neural networks and the generalized error backpropagation algorithm, as well as a discussion of their connection with the self-consistency paradigm frequently used in quantum physics. The relation of the latter to sorting methods and to the considered rectifier functions during the training of contextual neural networks is also studied in detail. The conclusions are backed up by the results of the performed experiments. The reported simulation outcomes confirm the ability of contextual neural networks to limit the activity of connections between their neurons and, more importantly, indicate detailed rules for selecting the most efficient sorting algorithm for updating the scan-paths of contextual neurons that use Hard and Exponential Rectifier activation functions. The presented results are of considerable value both for research and for practical applications, especially where the efficiency of training contextual neural networks is crucial.
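
For readers less familiar with the rectifier functions named above, the following minimal Python sketch illustrates one common reading of them; it is our illustration, not code from the paper. We assume the Hard Rectifier corresponds to the standard ReLU, f(x) = max(0, x), and the Exponential Rectifier to the ELU-style function, f(x) = x for x > 0 and alpha*(exp(x) - 1) otherwise. The update_scan_path helper is likewise hypothetical and only hints at why a sorting step appears inside the training loop of contextual neurons.

    import math

    def hard_rectifier(x: float) -> float:
        # Assumed form of the Hard Rectifier: the standard ReLU,
        # passing positive activations and zeroing the rest.
        return x if x > 0.0 else 0.0

    def exponential_rectifier(x: float, alpha: float = 1.0) -> float:
        # Assumed form of the Exponential Rectifier: an ELU-style
        # function with smooth exponential saturation below zero.
        return x if x > 0.0 else alpha * (math.exp(x) - 1.0)

    def update_scan_path(weights: list[float]) -> list[int]:
        # Hypothetical illustration of the sorting phase: order a
        # contextual neuron's inputs by decreasing absolute weight,
        # so the most informative connections are scanned first.
        # The paper studies which sorting algorithm performs this
        # step most efficiently within generalized error backpropagation.
        return sorted(range(len(weights)), key=lambda i: -abs(weights[i]))

Because the scan-path must be re-sorted repeatedly during training, the cost of this sorting step scales with the number of connections per neuron, which is why the choice of sorting algorithm matters for training efficiency.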