This paper clarifies the learning efficiency of non-regular parametric models, such as neural networks, whose true parameter set is an analytic variety with singular points. Using Sato's b-function, we rigorously prove that the free energy, or Bayesian stochastic complexity, is asymptotically equal to lambda_1 log n - (m_1 - 1) log log n + constant, where lambda_1 is a rational number, m_1 is a natural number, and n is the number of training samples. We also give an algorithm for calculating lambda_1 and m_1 based on resolution of singularities. In regular models, 2 lambda_1 equals the number of parameters and m_1 = 1, whereas in non-regular models such as neural networks, 2 lambda_1 is smaller than the number of parameters and m_1 >= 1.
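In display form, the asymptotic expansion of the free energy stated above can be written as follows (F(n) denotes the free energy, i.e. the Bayesian stochastic complexity, for n training samples; the O(1) term absorbs the constant):

```latex
F(n) = \lambda_1 \log n \;-\; (m_1 - 1) \log \log n \;+\; O(1),
\qquad \lambda_1 \in \mathbb{Q},\; m_1 \in \mathbb{N}.
```

For a regular model with d parameters, substituting lambda_1 = d/2 and m_1 = 1 recovers the familiar (d/2) log n leading term; in singular models lambda_1 < d/2, so the Bayesian complexity grows more slowly.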