Asia-Pacific Signal and Information Processing Association Annual Summit and Conference

Factorised Hidden Layer Based Domain Adaptation for Recurrent Neural Network Language Models

Abstract

Language models, which are used in tasks such as speech recognition and sentence completion, must typically handle texts covering many different domains. Domain adaptation has therefore been a long-standing challenge in language model research. Conventional methods mainly work by adding a domain-dependent bias. In this paper, we propose a novel way to adapt neural network-based language models. Our approach relies on a linear combination of factorised hidden layers, which is learnt by the network. For domain adaptation, we use topic features from latent Dirichlet allocation. These features are fed into an auxiliary network whose output is used to compute the hidden layer weights. The auxiliary network and the main network can be trained jointly by error backpropagation, which makes our proposed approach completely unsupervised. To evaluate the method, we report results on the well-known Penn Treebank and the TED-LIUM dataset.
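
To illustrate the mechanism the abstract describes, here is a minimal sketch assuming a PyTorch implementation. All names (FactorisedHiddenRNNLM, num_bases, topic_dim) are hypothetical, and the exact weights the paper factorises may differ; the sketch only shows the core idea: an auxiliary network maps LDA topic features to mixing weights over K learnt basis matrices, yielding a domain-adapted hidden layer.

```python
import torch
import torch.nn as nn


class FactorisedHiddenRNNLM(nn.Module):
    """Sketch of an RNN LM whose input-to-hidden weight matrix is a
    topic-dependent linear combination of K learnt basis matrices."""

    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 num_bases, topic_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # K factorised bases for the input-to-hidden transform
        # (hypothetical placement; the paper may factorise other
        # weights as well).
        self.bases = nn.Parameter(
            torch.randn(num_bases, embed_dim, hidden_dim) * 0.01)
        self.recur = nn.Linear(hidden_dim, hidden_dim)
        # Auxiliary network: LDA topic features -> K mixing weights,
        # trained jointly with the main network by backpropagation.
        self.aux = nn.Sequential(
            nn.Linear(topic_dim, num_bases),
            nn.Softmax(dim=-1))
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, topics, h):
        # tokens: (B,) word ids; topics: (B, topic_dim) LDA
        # posteriors; h: (B, hidden_dim) previous hidden state.
        x = self.embed(tokens)                              # (B, E)
        alpha = self.aux(topics)                            # (B, K)
        # Domain-adapted weight: W(d) = sum_k alpha_k(d) * W_k
        W = torch.einsum('bk,keh->beh', alpha, self.bases)  # (B, E, H)
        h = torch.tanh(torch.einsum('be,beh->bh', x, W)
                       + self.recur(h))
        return self.out(h), h                    # logits, new state
```

Training with the usual cross-entropy loss backpropagates through both the auxiliary and the main network, so the two are learnt jointly as the abstract states. Since the LDA topic posteriors are themselves obtained without domain labels, the adaptation scheme requires no supervision.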
