Computational Intelligence and Neuroscience

Deep Neural Networks with Multistate Activation Functions

Abstract

We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are a new kind of activation function capable of representing more than two states; they include the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when applied to classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs outperform conventional DNNs, achieving a relative improvement of 5.60% in phoneme error rate. Further experiments reveal that mean-normalised SGD facilitates the training of DNNs with MSAFs, especially on large training sets. When the training set is sufficiently large, the models can also be trained directly without pretraining, yielding a considerable relative improvement of 5.82% in word error rate.
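The abstract does not spell out the MSAFs' closed form. A common way to build an activation with more than two output states, and a plausible reading of the N-order and symmetrical MSAFs, is to sum shifted logistic sigmoids so that the function plateaus at several discrete levels. The Python sketch below illustrates that construction only; the function names, the shift spacing `d`, and the -1 offset of the symmetrical variant are illustrative assumptions, not the paper's exact parameterisation.

```python
# Illustrative sketch of multistate activation functions (MSAFs).
# Assumption: an N-order MSAF is a sum of N shifted logistic sigmoids,
# giving N + 1 quasi-stable output states {0, 1, ..., N}; the symmetrical
# MSAF is shifted so its three states {-1, 0, 1} are centred on the origin.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def n_order_msaf(x, n=2, d=4.0):
    """Sum of n logistic sigmoids shifted by multiples of d (assumed form)."""
    return sum(sigmoid(x - k * d) for k in range(n))

def symmetrical_msaf(x, d=4.0):
    """Three-state variant with plateaus near -1, 0, and 1 (assumed form)."""
    return sigmoid(x - d) + sigmoid(x + d) - 1.0
```

With n = 1 the sketched N-order MSAF reduces to the ordinary logistic sigmoid, which is the sense in which MSAFs generalise conventional two-state activations; larger n adds intermediate plateaus while keeping the function smooth and differentiable, so standard SGD backpropagation still applies.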