Asia-Pacific Signal and Information Processing Association Annual Summit and Conference

Factorised Hidden Layer Based Domain Adaptation for Recurrent Neural Network Language Models


Abstract

Language models, which are used in various tasks including speech recognition and sentence completion, are typically applied to texts covering a variety of domains. Domain adaptation has therefore been a long-standing challenge in language model research. Conventional methods mainly work by adding a domain-dependent bias. In this paper, we propose a novel way to adapt neural network-based language models. Our proposed approach relies on a linear combination of factorised hidden layers, which are learnt by the network. For domain adaptation, we use topic features from latent Dirichlet allocation. These features are input into an auxiliary network, and the output of this network is used to calculate the hidden layer weights. Both the auxiliary network and the main network can be trained jointly by error backpropagation. This makes our proposed approach completely unsupervised. To evaluate our method, we show results on the well-known Penn Treebank and the TED-LIUM dataset.
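To make the described architecture concrete, here is a minimal sketch of a factorised-hidden-layer RNN language model along the lines of the abstract: the recurrent layer's weight matrices are a linear combination of K learnt basis matrices, and the mixing weights are predicted by a small auxiliary network from LDA topic features of the current text. This is an assumed illustration, not the authors' implementation; all names, dimensions, and hyperparameters (AuxNet, FactorisedRNNLM, num_bases, topic_dim, the softmax normalisation of the mixing weights) are assumptions for the sketch.

```python
# Minimal sketch (assumed, not the authors' code) of a factorised hidden layer
# for domain adaptation of an RNN language model.
import torch
import torch.nn as nn


class AuxNet(nn.Module):
    """Auxiliary network: maps LDA topic features to K mixing weights."""

    def __init__(self, topic_dim: int, num_bases: int):
        super().__init__()
        self.proj = nn.Linear(topic_dim, num_bases)

    def forward(self, topic_feats: torch.Tensor) -> torch.Tensor:
        # Softmax normalisation of the mixing weights is an assumption here.
        return torch.softmax(self.proj(topic_feats), dim=-1)  # (batch, K)


class FactorisedRNNLM(nn.Module):
    """RNN LM whose hidden layer is adapted via factorised weight matrices."""

    def __init__(self, vocab_size: int, emb_dim: int, hid_dim: int,
                 topic_dim: int, num_bases: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # K basis matrices for the input-to-hidden and hidden-to-hidden maps.
        self.W_in = nn.Parameter(0.01 * torch.randn(num_bases, hid_dim, emb_dim))
        self.W_rec = nn.Parameter(0.01 * torch.randn(num_bases, hid_dim, hid_dim))
        self.bias = nn.Parameter(torch.zeros(hid_dim))
        self.aux = AuxNet(topic_dim, num_bases)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens: torch.Tensor, topic_feats: torch.Tensor):
        # tokens: (batch, seq_len); topic_feats: (batch, topic_dim).
        alpha = self.aux(topic_feats)  # per-utterance mixing weights, (batch, K)
        # Domain-adapted weights: linear combination of the learnt bases.
        W_in = torch.einsum('bk,kij->bij', alpha, self.W_in)    # (batch, hid, emb)
        W_rec = torch.einsum('bk,kij->bij', alpha, self.W_rec)  # (batch, hid, hid)

        emb = self.embed(tokens)  # (batch, seq_len, emb)
        h = tokens.new_zeros(tokens.size(0), self.bias.size(0), dtype=emb.dtype)
        logits = []
        for t in range(tokens.size(1)):
            x_t = emb[:, t]  # (batch, emb)
            # h_t = tanh(W_in x_t + W_rec h_{t-1} + b), with batch-specific weights.
            h = torch.tanh(
                torch.bmm(W_in, x_t.unsqueeze(-1)).squeeze(-1)
                + torch.bmm(W_rec, h.unsqueeze(-1)).squeeze(-1)
                + self.bias
            )
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, vocab)
```

Because the mixing weights flow from the auxiliary network into the hidden-layer weights, the cross-entropy language-modelling loss backpropagates through both the auxiliary and the main network, so they can be trained jointly; and since the topic features come from unsupervised LDA, no domain labels are required.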
