A mixture of experts (ME) model provides a modular approach in which component neural networks become specialists on subparts of a problem. In this framework, which follows the "divide-and-conquer" philosophy, a gating network learns to softly partition the input space into regions, each of which is modeled by one or more expert networks. In this paper, we investigate the application of different ME variants to several multivariate nonlinear dynamic system identification problems that are known to be difficult to handle. The aim is to provide a comparative performance analysis of various settings of the standard, gated, and localized ME models against more conventional neural network (NN) models.
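The gating mechanism described above can be illustrated with a minimal sketch of an ME forward pass, assuming linear experts and a softmax gating network with arbitrary (random) weights; the specific ME variants, training procedures, and dimensions used in the paper are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
n_experts, d_in, d_out = 3, 4, 2
W_experts = rng.normal(size=(n_experts, d_in, d_out))  # one linear map per expert
W_gate = rng.normal(size=(d_in, n_experts))            # gating network weights

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x):
    """x: (batch, d_in) -> (batch, d_out), a gate-weighted blend of expert outputs."""
    gates = softmax(x @ W_gate)                          # soft partition of input space
    expert_out = np.einsum('bi,eio->beo', x, W_experts)  # (batch, n_experts, d_out)
    return np.einsum('be,beo->bo', gates, expert_out)    # mixture of expert outputs

x = rng.normal(size=(5, d_in))
y = moe_forward(x)
print(y.shape)  # (5, 2)
```

Because the gate outputs sum to one for each input, the model's prediction is a convex combination of the experts' predictions, which is what makes the partition "soft" rather than a hard assignment of inputs to experts.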