Extending the authors' earlier research on intelligence forecasting, this study examined the forecasting skill of 3622 geopolitical forecasts extracted from strategic intelligence reports. The codable subset of forecasts (N = 2013) was expressed with verbal probabilities (e.g., likely) that were translated to numeric probability equivalents. This subset showed very good calibration and discrimination, but also underconfidence. There was no support for the hypothesis that forecasting skill was good mainly because forecasting topics were generally easy. First, forecasting skill was as good among authoritative key judgments as in the general set. Second, forecasts assigned high degrees of certainty (p ≤ 0.05 or p ≥ 0.95), indicative of ease, did not discriminate as well as less certain forecasts (0.05 < p < 0.95), and the two subsets did not differ in calibration. Sensitivity and benchmarking tests further revealed that even if the 1609 uncodable forecasts were all assigned forecast probabilities of 0.5 (i.e., if all followed a "cautious ignorance" rule), skill characteristics would still show a large effect-size improvement over a variety of guesswork strategies. The findings support a cautiously optimistic assessment of forecasting skill in strategic intelligence and indicate that such skill is not primarily attributable to the selection of easy forecasting topics. However, the large proportion of uncodable cases suggests that intelligence forecasts could be improved by avoiding imprecise language, which affects not only the codability but also, in all likelihood, the interpretability and indicative value of forecasts for intelligence consumers.

Additional Metadata
Keywords Forecasting, Intelligence analysis, Judgment, Prediction, Skill
Persistent URL dx.doi.org/10.1002/bdm.2055
Journal Journal of Behavioral Decision Making
Mandel, D.R. (David R.), & Barnes, A. (2017). Geopolitical Forecasting Skill in Strategic Intelligence. Journal of Behavioral Decision Making. doi:10.1002/bdm.2055