In this paper we present a survey on the application of recurrent neural networks to the task of statistical language modeling. Although it has been shown that these models obtain good performance on this task, often superior to other state-of-the-art techniques, they suffer from some important drawbacks, including a very long training time and limitations on the number of context words that can be taken into account in practice. Recent extensions to recurrent neural network models have been developed in an attempt to address these drawbacks. This paper gives an overview of the most important extensions. Each technique is described and its performance on statistical language modeling, as described in the existing literature, is discussed. Our structured overview makes it possible to detect the most promising techniques in the field of recurrent neural networks, applied to language modeling, but it also highlights the techniques for which further research is required.