Learning the Amplitude of Activation Functions in Layered Networks

In: Neural Nets WIRN Vietri-98

Abstract

This paper introduces a novel algorithm for learning the amplitude of non-linear activation functions (of arbitrary analytical form) in layered networks. The algorithm is based on a steepest gradient-descent technique, and relies on the inductive proof of a theorem involving the concept of the expansion function of the activation associated with a given unit of the neural net. Experimental results obtained on a speaker normalization task with a mixture of Multilayer Perceptrons show a tangible 12.64% word error rate reduction with respect to standard Back-Propagation training.
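The abstract does not spell out the update equations, but the core idea it describes (treating each unit's activation amplitude as a trainable parameter and updating it by steepest gradient descent alongside the weights) can be illustrated with a small sketch. The NumPy code below trains per-unit amplitudes lambda of tanh activations on a toy regression problem; the network shape, data, and learning rate are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm): one hidden layer whose
# units use amplitude-scaled tanh activations, h_i = lambda_i * tanh(z_i).
# Amplitudes are learned by steepest gradient descent jointly with weights.

rng = np.random.default_rng(0)

# Hypothetical toy regression data
X = rng.normal(size=(200, 4))
t = np.sin(X.sum(axis=1, keepdims=True))

n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
lam = np.ones(n_hid)   # per-unit amplitudes, initialized to plain tanh
eta = 0.05             # learning rate

for epoch in range(500):
    # Forward pass: hidden activation is lambda * tanh(z)
    z = X @ W1                 # (N, n_hid) pre-activations
    h = lam * np.tanh(z)       # amplitude-scaled activations
    y = h @ W2                 # linear output layer
    err = y - t                # dE/dy for E = 0.5 * sum(err**2)

    # Backward pass
    dh = err @ W2.T                        # dE/dh
    dlam = np.sum(dh * np.tanh(z), axis=0) # dE/dlambda: dh/dlambda = tanh(z)
    dz = dh * lam * (1.0 - np.tanh(z)**2)  # dE/dz through the scaled tanh
    dW1 = X.T @ dz
    dW2 = h.T @ err

    # Steepest gradient-descent updates for weights AND amplitudes
    W1 -= eta * dW1 / len(X)
    W2 -= eta * dW2 / len(X)
    lam -= eta * dlam / len(X)

print("learned amplitudes:", np.round(lam, 3))
```

Since dh/dlambda_i is simply tanh(z_i), the amplitude gradient comes almost for free during standard back-propagation, which is why the amplitudes can be folded into the same gradient-descent loop as the weights.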