A factored neural network estimates a conditional distribution over tokens using two smaller models: a class model and an index model. Every token is assigned a unique class and a unique index within that class, so a token's probability is the product of the probability of its class and the probability of its index given that class. The two smaller models are trained independently but cooperate at inference time. Factoring into more than two models is possible, and the component networks can be recurrent. Factored neural networks for statistical language modelling treat words as tokens; in that context, the classes capture linguistic regularities. Partitioning the words into classes keeps both the number of classes and the maximum class size small, and the partitioning is optimized by iteratively splitting and merging classes.
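The factored probability computation can be sketched as follows. This is a minimal illustration, not an implementation from the source: the toy vocabulary, the class/index assignment, and the stand-in logits are all assumptions, with the logits playing the role of the outputs the two trained models would produce for a given context.

```python
import numpy as np

# Hypothetical vocabulary of 6 tokens partitioned into 2 classes.
# Each token has a unique class and a unique index within that class.
token_to_class = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
token_to_index = {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2}

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def token_probability(token, class_logits, index_logits_per_class):
    """P(token | context) = P(class | context) * P(index | class, context)."""
    c = token_to_class[token]
    i = token_to_index[token]
    p_class = softmax(class_logits)[c]
    p_index = softmax(index_logits_per_class[c])[i]
    return p_class * p_index

# Toy logits standing in for the outputs of the class and index models.
class_logits = np.array([1.0, 0.5])
index_logits = [np.array([0.2, 0.1, 0.0]), np.array([0.3, 0.0, -0.1])]

probs = [token_probability(t, class_logits, index_logits) for t in range(6)]
assert abs(sum(probs) - 1.0) < 1e-9  # the factored probabilities sum to 1
```

The payoff of the factoring is visible in the sizes of the softmaxes: instead of one normalization over the full vocabulary, the class model normalizes over the (small) number of classes and the index model over the (small) size of one class.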