Most algorithms for training restricted Boltzmann machines (RBMs) are based on Gibbs sampling. When sampling is used to estimate the gradient, the sampled gradient is only an approximation of the true gradient, and the error between the two can be large, which seriously degrades the training of the network. To address this problem, this paper analyses the numerical error and the orientation error between the approximate gradient and the true gradient, and examines their influence on training performance. A gradient-fixing model is then established to adjust both the magnitude and the direction of the approximate gradient and thereby reduce the error. On this basis, we design a gradient-fixing-based Gibbs sampling training algorithm (GFGS) and a gradient-fixing-based parallel tempering algorithm (GFPT), and compare the new algorithms with existing ones experimentally. The results demonstrate that the new algorithms effectively mitigate the gradient-error problem and achieve higher training accuracy at a reasonable cost in computational runtime.
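To make the two error types concrete, the following sketch compares the exact log-likelihood gradient of a tiny RBM (small enough to enumerate its partition function) against a one-step Gibbs (CD-1) estimate, reporting both the numerical error (norm of the difference) and the orientation error (cosine between the two gradients). This is an illustrative toy, not the paper's GFGS/GFPT method; the RBM size, data, and omission of bias terms are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 3, 2                      # tiny RBM so the partition function is tractable
W = rng.normal(0, 0.1, (nv, nh))   # weights only; bias terms omitted for brevity
data = rng.integers(0, 2, (20, nv)).astype(float)  # toy binary training set

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def exact_grad(W, data):
    # Positive phase: E_data[v h^T], with h marginalised analytically.
    pos = data.T @ sigmoid(data @ W) / len(data)
    # Negative phase: E_model[v h^T], enumerating all 2^nv visible states.
    vs = np.array([[(i >> k) & 1 for k in range(nv)]
                   for i in range(2 ** nv)], float)
    # Unnormalised log-probability of v (negative free energy):
    # log p(v) + const = sum_j log(1 + exp(v . W_:j))
    logp = np.sum(np.log1p(np.exp(vs @ W)), axis=1)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    neg = (vs * p[:, None]).T @ sigmoid(vs @ W)
    return pos - neg

def cd1_grad(W, data, rng):
    # Same positive phase, but negative phase from one Gibbs step.
    ph = sigmoid(data @ W)
    pos = data.T @ ph / len(data)
    h = (rng.random(ph.shape) < ph).astype(float)    # sample h ~ P(h|v)
    pv = sigmoid(h @ W.T)
    v1 = (rng.random(pv.shape) < pv).astype(float)   # sample v ~ P(v|h)
    neg = v1.T @ sigmoid(v1 @ W) / len(data)
    return pos - neg

g_true = exact_grad(W, data)
g_cd = cd1_grad(W, data, rng)
num_err = np.linalg.norm(g_cd - g_true)              # numerical error
cos = (g_true.ravel() @ g_cd.ravel()) / (
    np.linalg.norm(g_true) * np.linalg.norm(g_cd) + 1e-12)  # orientation error
print("numerical error:", num_err, "direction cosine:", cos)
```

A direction cosine below 1 means the sampled gradient points away from the true ascent direction, which is the orientation error the paper's gradient-fixing model is designed to correct.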