Workshop on Neural Nets

Learning the Amplitude of Activation Functions in Layered Networks



Abstract

This paper introduces a novel algorithm for learning the amplitude of non-linear activation functions (of arbitrary analytical form) in layered networks. The algorithm is based on a steepest gradient-descent technique, and relies on the inductive proof of a theorem involving the concept of the expansion function of the activation associated with a given unit of the neural net. Experimental results obtained in a speaker normalization task with a mixture of Multilayer Perceptrons show a tangible 12.64% word error rate reduction with respect to standard Back-Propagation training.
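The core idea of treating the activation amplitude as a trainable parameter can be sketched in a minimal form. The following is an illustrative NumPy example, not the paper's exact algorithm: a single unit computes a * tanh(w*x + b), and the amplitude `a` is updated by steepest gradient descent alongside the weight and bias (all names here are hypothetical, chosen for the sketch).

```python
import numpy as np

# Sketch: learn the amplitude `a` of an a*tanh(.) activation by gradient
# descent on a mean-squared error, jointly with weight w and bias b.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
target = 2.0 * np.tanh(1.5 * x)       # synthetic target with amplitude 2

w, b, a = 0.5, 0.0, 1.0               # initial weight, bias, amplitude
lr = 0.1
for _ in range(5000):
    z = w * x + b
    t = np.tanh(z)
    y = a * t                          # amplitude-scaled activation
    err = y - target
    # gradients of the mean-squared error w.r.t. each parameter
    grad_a = np.mean(2.0 * err * t)                      # dE/da
    grad_w = np.mean(2.0 * err * a * (1.0 - t**2) * x)   # dE/dw
    grad_b = np.mean(2.0 * err * a * (1.0 - t**2))       # dE/db
    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b

print(a, w)  # amplitude approaches 2, slope approaches 1.5
```

In a full network, the paper's contribution is extending this per-unit amplitude update consistently through all layers via back-propagation; the sketch above shows only the single-unit case.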

