Conference: Congress of the Italian Association for Artificial Intelligence

Using the Hermite Regression Formula to Design a Neural Architecture with Automatic Learning of the 'Hidden' Activation Functions



Abstract

The value of the output-function gradient of a neural network, evaluated at the training points, plays an essential role in its generalization capability. In this paper a feed-forward neural architecture (Net) that can learn the activation functions of its hidden units during the training phase is presented. The automatic learning is obtained through the joint use of the Hermite regression formula and the CGD optimization algorithm with the Powell restart conditions. This technique yields an output function of Net that is smooth in the neighborhood of the training points, improving both the generalization capability and the flexibility of the neural architecture. Experimental results, obtained by comparing Net with traditional architectures using sigmoidal or sinusoidal activation functions, show that the former is very flexible and has good approximation and classification capabilities.
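The core idea of the abstract — representing a hidden unit's activation as a learnable expansion in an orthonormal Hermite basis — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper learns the coefficients jointly with the network weights via conjugate gradient descent with Powell restarts, while here the coefficients are simply fit by least squares to a fixed target shape, and all function names are ours.

```python
import numpy as np

def hermite_basis(x, num_terms):
    """Orthonormal Hermite functions h_0..h_{K-1} evaluated at a 1-D array x,
    built with the standard recurrence:
        h_{k+1}(x) = x*sqrt(2/(k+1))*h_k(x) - sqrt(k/(k+1))*h_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    H = np.empty((num_terms, x.size))
    H[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)          # h_0
    if num_terms > 1:
        H[1] = np.sqrt(2.0) * x * H[0]                     # h_1
    for k in range(1, num_terms - 1):
        H[k + 1] = x * np.sqrt(2.0 / (k + 1)) * H[k] \
                   - np.sqrt(k / (k + 1.0)) * H[k - 1]
    return H

def fit_activation(x, target, num_terms):
    """Hermite 'regression': least-squares coefficients c so that
    sum_k c_k * h_k(x) approximates target(x) on the sample points."""
    H = hermite_basis(x, num_terms)
    c, *_ = np.linalg.lstsq(H.T, target, rcond=None)
    return c

# Demo: recover a tanh-like activation from 12 Hermite terms.
x = np.linspace(-3, 3, 200)
c = fit_activation(x, np.tanh(x), num_terms=12)
approx = c @ hermite_basis(x, 12)
max_err = np.max(np.abs(approx - np.tanh(x)))
```

In the architecture described by the abstract, the coefficient vector `c` of each hidden unit would be an extra set of trainable parameters updated alongside the weights, so the shape of the activation itself adapts to the data during training.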
