Belief networks encode joint probability distribution functions and can be used as fitness functions in genetic algorithms. Individuals in the genetic algorithm's population then represent instantiations, or explanations, in the belief network. Computing the most probable explanations (belief revision) is thus cast as a genetic algorithm search over the joint probability distribution space. At any time, the fittest individual in the genetic algorithm's population is an estimate of the most probable explanation. This paper argues that the joint probability distribution functions represented by belief networks are typically multimodal and highly variable, so the genetic algorithm techniques known as sharing and scaling should be of help. It is shown empirically that this is indeed the case; in particular, niching combined with scaling significantly improves the quality of a genetic algorithm's estimate of the most probable explanations. A novel scaling approach, root scaling, is also introduced.
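The core idea can be sketched as follows: individuals are variable instantiations, and fitness is the joint probability computed from the network's conditional probability tables. The toy network, its CPT values, and the plain fitness-proportionate GA below are illustrative assumptions, not the paper's method; sharing (niching) and root scaling are not implemented here.

```python
import itertools
import random

random.seed(0)

# Toy belief network A -> B -> C over binary variables (illustrative
# assumption; the CPT values are not taken from the paper).
p_a = {0: 0.3, 1: 0.7}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}  # key: (a, b)
p_c_given_b = {(0, 0): 0.6, (0, 1): 0.4, (1, 0): 0.1, (1, 1): 0.9}  # key: (b, c)

def joint(ind):
    """Joint probability of an instantiation (a, b, c): the GA's fitness."""
    a, b, c = ind
    return p_a[a] * p_b_given_a[(a, b)] * p_c_given_b[(b, c)]

def ga_mpe(pop_size=20, generations=30, p_mut=0.1):
    """Estimate the most probable explanation with a simple GA."""
    pop = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportionate selection on the joint probability.
        weights = [joint(ind) for ind in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            p1, p2 = parents[i], parents[i + 1]
            cut = random.randint(1, 2)  # one-point crossover
            nxt.append(p1[:cut] + p2[cut:])
            nxt.append(p2[:cut] + p1[cut:])
        # Bit-flip mutation.
        pop = [tuple(b ^ (random.random() < p_mut) for b in ind) for ind in nxt]
    return max(pop, key=joint)

best = ga_mpe()
exact = max(itertools.product([0, 1], repeat=3), key=joint)  # brute force
print("GA estimate:", best, "exact MPE:", exact)
```

Because the network is tiny, the exact most probable explanation can be found by brute-force enumeration and compared against the GA's estimate; on multimodal landscapes, this is where sharing and scaling would be expected to improve the estimate.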