The problem of estimating parameters theta that determine the mean mu(theta) of a Gaussian-distributed observation X is considered. It is noted that the maximum-likelihood (ML) estimate, in this case the least-squares estimate, has desirable statistical properties but can be difficult to compute when mu(theta) is a nonlinear function of theta. An estimate formed by combining ML estimates based on subsections of the data vector X is proposed as a computationally inexpensive alternative. It is shown that this alternative estimate, termed the divide-and-conquer estimate, has ML performance in the small-error region when the data vector X is appropriately subdivided. As an example application, an inexpensive range-difference-based position estimator is derived and shown by Monte Carlo simulation to have small-error-region mean-square error equal to the Cramér-Rao bound.
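The combining step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's range-difference application: it assumes a hypothetical scalar model x_i = exp(-theta * t_i) + noise, computes a least-squares (ML) estimate on each subsection of the data vector, and combines the subsection estimates with Fisher-information weights, which is one natural way to realize the weighted combination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model (an illustrative stand-in, not the paper's
# range-difference example): x_i = exp(-theta * t_i) + Gaussian noise.
theta_true, sigma = 0.7, 0.05
t = np.linspace(0.1, 3.0, 60)
x = np.exp(-theta_true * t) + sigma * rng.normal(size=t.size)

def block_ml(t_blk, x_blk):
    """ML (least-squares) estimate on one subsection via a coarse grid search."""
    grid = np.linspace(0.01, 2.0, 2000)
    resid = x_blk[None, :] - np.exp(-grid[:, None] * t_blk[None, :])
    theta_hat = grid[np.argmin((resid ** 2).sum(axis=1))]
    # Fisher information of the block, used here as the combining weight.
    dmu = -t_blk * np.exp(-theta_hat * t_blk)
    return theta_hat, (dmu ** 2).sum() / sigma ** 2

# Divide: split the data vector into subsections; conquer: estimate per block.
estimates, weights = zip(*(block_ml(tb, xb)
                           for tb, xb in zip(np.array_split(t, 4),
                                             np.array_split(x, 4))))

# Combine: Fisher-information-weighted average of the subsection estimates.
theta_dc = float(np.average(estimates, weights=weights))
```

Each per-block grid search is over a small subsection, so the total cost grows linearly with the number of blocks rather than requiring one large nonlinear optimization over the full data vector.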