The standard error and sampling distribution of robust estimates can, in principle, be estimated using the bootstrap. However, two problems arise when applying the bootstrap to robust estimates on moderately large data sets: the bootstrap estimates may be unreliable because the proportion of outliers in many bootstrap samples can exceed that in the original data set, and the high computational cost of robust regression estimates may render the method infeasible for moderately high-dimensional problems. Recently, Salibian-Barrera and Zamar (2002) proposed a new bootstrap method, called the robust bootstrap, to estimate the asymptotic distribution and asymptotic variance of MM-estimates. This method overcomes both problems: it is fast and stable, in that it can resist a large proportion of outliers in the bootstrap samples. Unfortunately, its convergence appears to be only of order Op(1/√n). Another way to estimate the asymptotic variance of robust estimates on large data sets is to bootstrap a one-step Newton-Raphson iteration of their estimating equations. This method will typically be fast, and hence feasible on moderately large data sets. In this paper we compare the performance of this method with that of the robust bootstrap for the simple location-scale model.
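To make the one-step idea concrete, the sketch below applies it to a Huber M-estimate of location (a stand-in for the paper's location-scale setting; the choice of Huber's psi, the tuning constant c = 1.345, and the MAD auxiliary scale are all illustrative assumptions, not the paper's exact setup). Rather than fully re-solving the estimating equation on every bootstrap sample, each replicate takes a single Newton-Raphson step from the full-sample fit, which is what makes the method cheap on large data sets.

```python
import random
import statistics

def huber_psi(r, c=1.345):
    # Huber's psi function: identity inside [-c, c], clipped outside
    return max(-c, min(c, r))

def huber_psi_prime(r, c=1.345):
    # Derivative of Huber's psi: 1 inside [-c, c], 0 outside
    return 1.0 if abs(r) <= c else 0.0

def m_location(x, c=1.345, tol=1e-8, max_iter=100):
    # Fully iterated Huber M-estimate of location; the MAD (rescaled to be
    # consistent at the normal) serves as a fixed auxiliary scale.
    med = statistics.median(x)
    s = 1.4826 * statistics.median([abs(xi - med) for xi in x])
    mu = med
    for _ in range(max_iter):
        num = sum(huber_psi((xi - mu) / s, c) for xi in x)
        den = sum(huber_psi_prime((xi - mu) / s, c) for xi in x)
        step = s * num / den
        mu += step
        if abs(step) < tol:
            break
    return mu, s

def one_step_bootstrap(x, B=500, c=1.345, seed=0):
    # For each bootstrap sample, take only ONE Newton-Raphson step of the
    # estimating equation, starting from the full-sample fit (mu_hat, s_hat),
    # instead of iterating to convergence on every resample.
    rng = random.Random(seed)
    mu_hat, s_hat = m_location(x, c)
    n = len(x)
    boot = []
    for _ in range(B):
        xb = [x[rng.randrange(n)] for _ in range(n)]
        num = sum(huber_psi((xi - mu_hat) / s_hat, c) for xi in xb)
        den = sum(huber_psi_prime((xi - mu_hat) / s_hat, c) for xi in xb)
        # guard against a degenerate denominator on a wild resample
        boot.append(mu_hat + s_hat * num / max(den, 1.0))
    return mu_hat, boot
```

The spread of the returned replicates `boot` then estimates the standard error of the location estimate; the per-replicate cost is a single pass over the resampled data rather than a full iterative fit.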