Most decision analyses include continuous uncertainties (e.g., oil in place, oil price, or porosity). Analysts are frequently concerned with how to best structure, compute, and communicate decision models under these circumstances. While decision trees are well suited for discrete random variables with a few possibilities, they become unmanageable for a large number of outcomes. To address this concern, analysts frequently use discrete approximations such as Swanson's Mean. In this case, one approximates a continuous probability distribution by weighting the P10-P50-P90 fractiles by 0.30-0.40-0.30. Unfortunately, this method, and others like it, significantly underestimates the mean, variance, and skewness of most distributions—especially the lognormal, where its use is common. In this paper, we compare different discretizations within the context of a value of information problem and document the degree of error induced. We find that the best discretization is dependent on the decision context, which is difficult to specify in advance. In addition, we contrast the use of discrete approximations to Monte Carlo simulation, which many view as being more accurate. One must keep in mind, however, that simulation induces sampling error, while discretizations induce approximation error. The question is how many Monte Carlo (MC) trials it takes such that these two errors are equivalent. We find that it takes thousands, perhaps tens of thousands, of MC trials to provide better results than simple discretization methods. This is quite important if one is using MC in connection with a model that takes a long time to compute a single realization.
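The underestimation described above can be illustrated with a short sketch. The snippet below (an illustrative example, not from the paper; the unit-lognormal parameters mu = 0 and sigma = 1 are assumed for concreteness) applies the 0.30-0.40-0.30 weighting to the P10-P50-P90 fractiles of a lognormal distribution and compares the resulting moments against the exact closed-form values:

```python
import math

# Standard-normal 90th-percentile z-score (so P10/P90 sit at -z and +z in log space)
Z90 = 1.2815515655

# Assumed lognormal parameters for illustration: log-mean mu, log-std sigma
mu, sigma = 0.0, 1.0

# Fractiles of the lognormal distribution
p10 = math.exp(mu - Z90 * sigma)
p50 = math.exp(mu)
p90 = math.exp(mu + Z90 * sigma)

# Swanson's Mean: weight P10-P50-P90 by 0.30-0.40-0.30
weights = (0.30, 0.40, 0.30)
points = (p10, p50, p90)
swanson_mean = sum(w * x for w, x in zip(weights, points))
swanson_var = sum(w * x**2 for w, x in zip(weights, points)) - swanson_mean**2

# Exact lognormal moments
true_mean = math.exp(mu + sigma**2 / 2)
true_var = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)

print(f"mean:     Swanson {swanson_mean:.4f} vs exact {true_mean:.4f}")
print(f"variance: Swanson {swanson_var:.4f} vs exact {true_var:.4f}")
```

For these parameters the discretization understates the mean by roughly 5% and the variance by more than half, consistent with the abstract's claim that the error is especially severe for the lognormal.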