Language models, used in tasks such as speech recognition and sentence completion, must typically handle text spanning many domains. Domain adaptation has therefore been a long-standing challenge in language model research. Conventional methods mainly work by adding a domain-dependent bias. In this paper, we propose a novel way to adapt neural network-based language models. Our approach relies on a linear combination of factorised hidden layers that is learnt by the network. For domain adaptation, we use topic features obtained from latent Dirichlet allocation. These features are fed into an auxiliary network whose output determines the hidden-layer weights. The auxiliary network and the main network are trained jointly by error backpropagation, which makes our approach completely unsupervised. We evaluate our method on the well-known Penn Treebank and the TED-LIUM corpus.
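To make the described architecture concrete, the following is a minimal PyTorch sketch of one adapted hidden layer: a set of factorised weight matrices is combined with mixture weights produced by an auxiliary network from LDA topic features, and the whole module is differentiable end to end, so both networks can be trained jointly by backpropagation. The class name `FactorisedAdaptiveLayer`, the softmax over mixture weights, the tanh nonlinearity, and all dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FactorisedAdaptiveLayer(nn.Module):
    """Hidden layer formed as a topic-weighted linear combination of K factor layers.

    Sketch under assumptions: the auxiliary network is a single linear map with a
    softmax, and factors are combined at the weight-matrix level.
    """
    def __init__(self, in_dim, hidden_dim, num_factors, topic_dim):
        super().__init__()
        # K factorised weight matrices, one per latent factor (illustrative layout).
        self.factors = nn.Parameter(torch.randn(num_factors, in_dim, hidden_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(hidden_dim))
        # Auxiliary network: maps LDA topic features to mixture weights over factors.
        self.aux = nn.Sequential(
            nn.Linear(topic_dim, num_factors),
            nn.Softmax(dim=-1),  # assumption: normalised mixture weights
        )

    def forward(self, x, topic_features):
        # alpha: (batch, K) mixture weights computed by the auxiliary network.
        alpha = self.aux(topic_features)
        # Linearly combine the factor matrices per example: (batch, in_dim, hidden_dim).
        W = torch.einsum('bk,kio->bio', alpha, self.factors)
        # Apply the adapted layer: (batch, hidden_dim).
        return torch.tanh(torch.einsum('bi,bio->bo', x, W) + self.bias)

# Usage: gradients flow through both the factor matrices and the auxiliary
# network, so a single backward pass trains them jointly.
layer = FactorisedAdaptiveLayer(in_dim=256, hidden_dim=256, num_factors=4, topic_dim=50)
h = layer(torch.randn(8, 256), torch.rand(8, 50))  # topic features, e.g. from LDA
```

Because the topic features come from an unsupervised LDA model and no domain labels enter the loss, training this module requires no supervision beyond the text itself, which is the sense in which the adaptation is unsupervised.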