During the past few years, a variety of methods have been developed for learning probabilistic networks from data, among which heuristic single-link forward or backward searches are widely adopted to reduce the search space. A major drawback of these search heuristics is that they cannot guarantee convergence to the right networks even when a sufficiently large data set is available. This motivates us to explore a new algorithm that does not suffer from this problem. In this paper, we first identify an asymptotic property of different score metrics, based on which we then present a hybrid learning method that can be proved to be asymptotically convergent. We show that the algorithm, when employing the information criterion and the Bayesian metric, is guaranteed to converge under very general conditions and is computationally feasible. An evaluation of the algorithm with simulated data is given to demonstrate its capability.