Annual Meeting of the Association for Computational Linguistics

Combination of Recurrent Neural Networks and Factored Language Models for Code-Switching Language Modeling


Abstract

In this paper, we investigate the application of recurrent neural network language models (RNNLM) and factored language models (FLM) to the task of language modeling for Code-Switching speech. We present a way to integrate part-of-speech tags (POS) and language information (LID) into these models which leads to significant improvements in terms of perplexity. Furthermore, a comparison between RNNLMs and FLMs and a detailed analysis of perplexities on the different backoff levels are performed. Finally, we show that recurrent neural networks and factored language models can be combined using linear interpolation to achieve the best performance. The final combined language model provides 37.8% relative improvement in terms of perplexity on the SEAME development set and a relative improvement of 32.7% on the evaluation set compared to the traditional n-gram language model.
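The combination step is a linear interpolation of the two models' per-word probabilities, P(w|h) = λ·P_RNNLM(w|h) + (1−λ)·P_FLM(w|h). The sketch below illustrates this and the resulting perplexity computation; it is a minimal illustration, not the paper's implementation. The probability values and the weight LAM are hypothetical; in practice the weight would be tuned on held-out data such as the SEAME development set.

```python
import math

# Interpolation weight; an arbitrary illustrative value, in practice
# tuned on held-out data (e.g. a development set).
LAM = 0.5

def interpolate(p_rnn: float, p_flm: float, lam: float = LAM) -> float:
    """Linearly interpolate the per-word probabilities of two LMs:
    P(w|h) = lam * P_rnnlm(w|h) + (1 - lam) * P_flm(w|h)."""
    return lam * p_rnn + (1.0 - lam) * p_flm

def perplexity(word_probs: list[float]) -> float:
    """Perplexity of a word sequence from its per-word probabilities."""
    avg_log_prob = sum(math.log(p) for p in word_probs) / len(word_probs)
    return math.exp(-avg_log_prob)

# Hypothetical per-word probabilities for one sentence from each model.
p_rnnlm = [0.12, 0.05, 0.30, 0.08]
p_flm   = [0.10, 0.07, 0.25, 0.06]

combined = [interpolate(r, f) for r, f in zip(p_rnnlm, p_flm)]
print(f"RNNLM ppl:    {perplexity(p_rnnlm):.1f}")
print(f"FLM ppl:      {perplexity(p_flm):.1f}")
print(f"combined ppl: {perplexity(combined):.1f}")
```

Because the interpolation is convex, the combined model's perplexity can be no worse than the better component on the data used to tune λ, which is why the paper reports its best results with this combination.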

