Hidden Markov models (HMMs) are widely applied to the analysis of time-dependent data sequences, in areas such as non-linear signal processing, natural language processing, and bioinformatics. Training data for an HMM comes in two possible formats: a set of many time-dependent sequences, or a single, effectively infinite sequence. The learning process is one of the main concerns in machine learning. For the former case, there is a method based on algebraic geometry that reveals the generalization ability. For the latter case, however, there is no theoretical analysis. To construct a foundation for it, this paper reports some unique properties of the likelihood function in an experimental manner, and then explains them in a theoretical manner. The results show that the likelihood function implicitly includes a local-maxima factor, which can make the learning process slow, and that this slow learning realizes high performance under a stationary-state evaluation.
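For context, the likelihood function discussed here is the probability an HMM assigns to an observed sequence, which for a long sequence is typically evaluated with the scaled forward algorithm. The following is a minimal sketch of that computation; the two-state model and its parameters are illustrative placeholders, not taken from the paper:

```python
import math

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi[i]: initial state probabilities, A[i][j]: transition probabilities,
    B[i][k]: probability of emitting symbol k in state i."""
    n = len(pi)
    # Forward variables for the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    scale = sum(alpha)
    log_lik = math.log(scale)
    alpha = [a / scale for a in alpha]
    # Recursion: propagate and rescale at each step so that
    # the log-likelihood accumulates without underflow.
    for t in range(1, len(obs)):
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
        scale = sum(alpha)
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return log_lik

# Illustrative two-state model with a binary output alphabet.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(hmm_log_likelihood([0, 1, 0, 0, 1], pi, A, B))
```

Maximum-likelihood training (e.g. Baum-Welch or gradient ascent) climbs this function, which is where the local-maxima structure the paper analyzes comes into play.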