The present paper deals with monotonic and dual-monotonic probabilistic identification of indexed families of uniformly recursive languages from positive data. In particular, we consider the special case where the probability equals 1. Earlier results on probabilistic identification established that, for function identification, every collection of recursive functions identifiable with probability p > 1/2 is deterministically identifiable (cf. [23]). In the case of language learning from text, every collection of recursive languages identifiable from text with probability p > 2/3 is deterministically identifiable (cf. [20]). In particular, there is no gain in learning power when the collections of functions or languages are required to be inferred with probability p = 1. As shown in [18], highly structured probabilistic hierarchies arise when dealing with probabilistic learning under monotonicity constraints. In this paper, we consider monotonic and dual-monotonic probabilistic learning of indexed families with respect to proper, class preserving, and class comprising hypothesis spaces. In particular, we prove for proper monotonic as well as for proper dual-monotonic learning that probabilistic learning is more powerful than deterministic learning even if the probability is required to be 1. To establish this result, we need a sophisticated version of the proof technique developed in [17].