A GPU is a circuit dedicated to drawing graphics, and it is therefore characterized by a large number of simple arithmetic units. This massive parallelism is expected to be applicable beyond graphics processing. In this paper, the neural network, one of the pattern recognition algorithms, is accelerated by using a GPU. In the learning of a neural network, there are many points that can be processed at the same time. We propose a method that parallelizes the neural network at three points: across neural networks with different initial weight coefficients, across the learning patterns, and across the neurons within a layer. These methods are used in combination, but the first method can also be processed independently. Therefore the first method alone is employed as the baseline against which the proposed combined method is compared. As a result, the proposed method is 6 times faster than the baseline method.
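The three levels of parallelism described above can be sketched as one batched tensor computation. This is a minimal NumPy illustration, not the paper's implementation: the array names, layer sizes, and the sigmoid activation are assumptions, and on a GPU the same expression would be dispatched to the many arithmetic units in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

num_nets = 4        # networks with different initial weight coefficients
num_patterns = 8    # learning patterns processed simultaneously
n_in, n_hidden = 3, 5  # assumed toy layer sizes

# One weight matrix per network, each with its own random initialization.
W = rng.standard_normal((num_nets, n_hidden, n_in))
X = rng.standard_normal((num_patterns, n_in))

# Forward pass of one layer for every (network, pattern, neuron) triple at
# once: H[k, p, j] = sigmoid( sum_i W[k, j, i] * X[p, i] ).
# Axis 0 parallelizes over networks, axis 1 over patterns, axis 2 over neurons.
H = 1.0 / (1.0 + np.exp(-np.einsum('kji,pi->kpj', W, X)))

print(H.shape)  # (4, 8, 5)
```

The single `einsum` exposes all three axes of independence to the hardware, which is the kind of structure a GPU kernel can exploit.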