We analyse the performance of well-known evolutionary algorithms, the (1 + 1) EA and the (1 + λ) EA, in the prior noise model, where in each fitness evaluation the search point is altered before the evaluation with probability p. We present refined results for the expected optimisation time of these algorithms on the function LeadingOnes, where bits have to be optimised in sequence. Previous work showed that the (1 + 1) EA on LeadingOnes runs in polynomial expected time if p = O((log n)/n²) and needs superpolynomial expected time if p = ω((log n)/n), leaving a huge gap for which no results were known. We close this gap by showing that the expected optimisation time is Θ(n²) · exp(Θ(min{pn², n})) for all p ≤ 1/2, which allows us to locate, for the first time, the threshold between polynomial and superpolynomial expected times at p = Θ((log n)/n²). Hence the (1 + 1) EA on LeadingOnes is surprisingly sensitive to noise. We also show that offspring populations of size λ ≥ 3.42 log n can effectively deal with much higher noise than known before. Finally, we present an example of a rugged landscape where prior noise can help to escape from local optima by blurring the landscape and allowing a hill climber to see the underlying gradient. We prove that in this particular setting noise can have a highly beneficial effect on performance.
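To make the setting concrete, the following minimal Python sketch shows a (1 + 1) EA with standard bit mutation on LeadingOnes under one common form of prior noise: with probability p, a single uniformly random bit of a copy of the search point is flipped before evaluation. The choice of noise operator, the re-evaluation of the parent in every generation, and all names and parameters are illustrative assumptions, not details taken from the paper.

```python
import random

def leading_ones(x):
    """LeadingOnes value: length of the longest prefix of 1-bits."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def noisy_leading_ones(x, p):
    """Prior noise (assumed one-bit variant): with probability p, evaluate a
    copy of x with one uniformly random bit flipped; x itself is unchanged."""
    y = list(x)
    if random.random() < p:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return leading_ones(y)

def one_plus_one_ea(n, p, max_gens=200000):
    """(1 + 1) EA with standard bit mutation (rate 1/n) under prior noise.
    Parent and offspring are both (re-)evaluated with noise each generation,
    which is one possible convention. Returns the number of generations until
    the true optimum is reached, or None if the budget is exhausted."""
    x = [random.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        if leading_ones(x) == n:          # stopping check uses the true fitness
            return gen
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if noisy_leading_ones(y, p) >= noisy_leading_ones(x, p):
            x = y                          # accept offspring on the noisy comparison
    return None

if __name__ == "__main__":
    random.seed(1)
    print(one_plus_one_ea(n=30, p=0.01))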