This paper introduces modified backpropagation algorithms for training multilayer feedforward neural networks with hard-limiting neurons. Transforming activation functions, namely modified continuous sigmoidal functions with an adaptive steepness factor, are used in all hidden layers. During training, the steepness factor grows from a small positive value toward infinity as the sum squared error decreases, so that the resulting network architecture is composed solely of hard-limiting neurons. The learning algorithm is similar to conventional backpropagation; only the derivatives of the hidden-neuron activation functions are modified. Extensive numerical simulations demonstrate the feasibility of the proposed algorithm. In addition, its numerical properties are discussed and comparisons with other algorithms are made.
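The core idea above can be sketched in code. The following is a minimal, illustrative NumPy example, not the paper's exact algorithm: a tiny 2-2-1 network is trained by standard backpropagation, except that the hidden layer uses a sigmoid with steepness factor beta (so its derivative carries a factor of beta), and beta is raised as the sum squared error falls. The task (logical AND), the network sizes, the learning rate, and the particular beta schedule are all assumptions made for the sketch; the cap on beta stands in for the paper's limit of infinity.

```python
import numpy as np

def sigmoid(x, beta):
    """Sigmoid with steepness factor beta.

    As beta -> infinity this approaches the hard-limiting step
    function; its derivative is beta * s * (1 - s).
    """
    return 1.0 / (1.0 + np.exp(-beta * x))

# Toy 2-2-1 network trained on logical AND (sizes and task are
# illustrative choices, not taken from the paper).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [0.], [0.], [1.]])
W1, b1 = rng.normal(0.0, 1.0, (2, 2)), np.zeros(2)
W2, b2 = rng.normal(0.0, 1.0, (2, 1)), np.zeros(1)

lr, beta = 0.1, 1.0
initial_sse = None
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1, beta)   # hidden layer: steepened sigmoid
    y = sigmoid(h @ W2 + b2, 1.0)    # output layer: ordinary sigmoid
    e = y - T
    sse = float((e ** 2).sum())
    if initial_sse is None:
        initial_sse = sse
    # Standard backprop, except the hidden derivative carries beta.
    dy = e * y * (1.0 - y)
    dh = (dy @ W2.T) * beta * h * (1.0 - h)
    W2 -= lr * h.T @ dy
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)
    # Heuristic schedule (our assumption, not the paper's exact rule):
    # raise beta as the SSE falls; the cap keeps the arithmetic stable
    # in place of the paper's limit of infinity.
    beta = min(10.0, 0.5 + 1.0 / max(sse, 1e-3))

final_sse = sse
# Once beta is large, the hidden sigmoids can be swapped for hard
# limiters, giving the all-hard-limiting architecture the paper targets:
hard_h = (X @ W1 + b1 > 0.0).astype(float)
y_hard = sigmoid(hard_h @ W2 + b2, 1.0)
```

The key point the sketch illustrates is that nothing in the weight-update equations changes except the hidden-layer derivative term `beta * h * (1 - h)`; as beta grows, the trained hidden units saturate and can be replaced by step functions.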