Feedforward neural networks have proved to be a valuable tool in many applications. Although the basic principles of employing such networks are quite straightforward, tuning their architectures to achieve near-optimal performance remains a challenging task. Genetic algorithms may be used to solve this problem, since they have a number of distinct features that are useful in this context. First, the approach is quite universal and can be applied to many different types of neural networks and training criteria. It also allows network topologies to be optimized at various levels of detail and can be used with many types of energy function, even those that are discontinuous or non-differentiable. Finally, a genetic algorithm need not be limited to adjusting patterns of connections; it can, for example, be used to select node transfer functions or weight values, or to find architectures that perform best under certain simulated working conditions. In this paper we investigate an application of genetic algorithms to feedforward neural network architecture design. The networks are used to model a nonlinear, discrete-time SISO system when only noisy training data are available. Additionally, some incidental but nonetheless important aspects of neural network optimization, such as complexity penalties and automatic topology simplification, are discussed.
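The approach described above can be sketched in miniature as follows. This is not the paper's algorithm, only an illustrative example under assumed details: the genome is simply the hidden-layer width of a one-hidden-layer tanh network, the noisy SISO data come from a hypothetical system y = 0.6·sin(πu) + noise, and the fitness combines validation error with a small complexity penalty, with tournament selection and integer mutation as the genetic operators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data from a hypothetical nonlinear SISO system
# (assumed example; the paper's actual plant is not reproduced here).
u = rng.uniform(-1, 1, 200)
y = 0.6 * np.sin(np.pi * u) + rng.normal(0, 0.05, u.shape)

def train_and_eval(hidden, epochs=300, lr=0.1):
    """Train a one-hidden-layer tanh network by gradient descent,
    return its mean squared error on a held-out validation split."""
    W1 = rng.normal(0, 0.5, (hidden, 1)); b1 = np.zeros((hidden, 1))
    W2 = rng.normal(0, 0.5, (1, hidden)); b2 = np.zeros((1, 1))
    X, T = u[None, :150], y[None, :150]      # training split
    Xv, Tv = u[None, 150:], y[None, 150:]    # validation split
    for _ in range(epochs):
        H = np.tanh(W1 @ X + b1)
        E = (W2 @ H + b2) - T
        dW2 = E @ H.T / X.shape[1]; db2 = E.mean(axis=1, keepdims=True)
        dH = (W2.T @ E) * (1 - H**2)
        dW1 = dH @ X.T / X.shape[1]; db1 = dH.mean(axis=1, keepdims=True)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    Yv = W2 @ np.tanh(W1 @ Xv + b1) + b2
    return np.mean((Yv - Tv) ** 2)

def fitness(hidden, penalty=1e-3):
    # Complexity penalty discourages needlessly large architectures.
    return train_and_eval(hidden) + penalty * hidden

# Genome: hidden-layer width. Evolve by tournament selection + mutation.
pop = list(rng.integers(1, 16, size=8))
for gen in range(5):
    scores = [fitness(h) for h in pop]
    new_pop = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)
        parent = pop[i] if scores[i] < scores[j] else pop[j]
        child = max(1, parent + rng.integers(-2, 3))  # mutate width
        new_pop.append(child)
    pop = new_pop

best = min(pop, key=fitness)
print("best hidden width:", best)
```

In a realistic setting the genome would encode far more (per-connection masks, transfer functions, weight values), and the complexity penalty could be tied to the number of active connections rather than the layer width; the structure of the loop, however, stays the same.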