The Miracle of Combining Forecasts

In 1947, in New York City, there was the Miracle on 34th Street. In 1980, at the Winter Olympics, there was the Miracle on Ice. In 1992, at the Academy Awards, there was the miracle of Marisa Tomei winning the Best Supporting Actress Oscar. And just this week, in 2014, there was the miracle of getting off the SAS campus on a Wednesday afternoon in the middle of winter storm Pax. There are also those “officially recognized” miracles that can land a person in sainthood. These frequently involve images burned into pancakes or grown into fruits and vegetables (e.g. the Richard Nixon eggplant). While I have little chance of becoming a saint, I have witnessed a miracle in the realm of business forecasting: the miracle of combining forecasts.

A Miracle of Business Forecasting

Last week’s installment of The BFD highlighted an interview with Greg Fishel, Chief Meteorologist at WRAL, on the topic of combined or “ensemble” models in weather forecasting. In that application, multiple perturbations of the initial conditions (minor changes to temperature, humidity, etc.) are fed through the same forecasting model. If the perturbations produce wildly different results, this indicates a high level of uncertainty in the forecast. If they produce very similar results, the weather scientists take this as grounds for confidence in the forecast.

Note the difference from typical business forecasting. In Fishel’s weather example, the ensemble forecast is created by passing multiple variations of the input data through the same forecasting model. In business forecasting, we usually do the reverse: we feed the same initial conditions (e.g. a time series of historical sales) into multiple models, then take a composite (e.g. an average) of the resulting forecasts. That composite becomes our combined or ensemble forecast.

In 2001, J. Scott Armstrong published a valuable summary of the literature in “Combining Forecasts,” a chapter in his Principles of Forecasting. Armstrong’s work is referenced heavily in a recent piece by Graefe, Armstrong, Jones, and Cuzan in the International Journal of Forecasting, 30 (2014), 43-54. Graefe et al. remind us of the conditions under which combining is most valuable, illustrating them with an application to election forecasting. Since I am not fond of politics or politicians, we’ll skip the elections part.
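Before the list, it may help to see what this style of combination looks like in code. Below is a minimal Python sketch (the choice of models and the sales figures are illustrative assumptions, not from the article or the paper): the same sales history is fed into three simple methods, and their one-step-ahead forecasts are averaged with equal weights.

```python
import numpy as np

# Hypothetical monthly sales history (illustrative numbers only)
sales = np.array([112, 118, 132, 129, 121, 135, 148, 148,
                  136, 119, 104, 118], dtype=float)

def naive_forecast(y):
    """Naive method: the next value equals the last observed value."""
    return y[-1]

def moving_average_forecast(y, window=3):
    """Average of the most recent `window` observations."""
    return y[-window:].mean()

def ses_forecast(y, alpha=0.3):
    """Simple exponential smoothing with smoothing weight alpha."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

# The same initial conditions (the sales history) go into multiple models...
forecasts = [naive_forecast(sales),
             moving_average_forecast(sales),
             ses_forecast(sales)]

# ...and the combined (ensemble) forecast is their equal-weight average.
combined = np.mean(forecasts)
print("Individual forecasts:", [round(f, 1) for f in forecasts])
print(f"Combined forecast: {combined:.1f}")
```

The combining step is nothing more than that final average; the component forecasts can come from models of any sophistication. With that picture in mind, here are the conditions under which combining can help: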

  • “Combining is applicable to many estimation and forecasting problems. The only exception is when strong prior evidence exists that one method is best and the likelihood of bracketing is low” (p.44). [“Bracketing” occurs when one forecast is higher than the actual and another is lower; see the numeric sketch after this list.] This suggests that combining forecasts should be our default method. We should select one particular model only when there is strong evidence that it is best. However, in most real-world forecasting situations, we cannot know in advance which forecast will be most accurate.
  • Combine forecasts from several methods. Armstrong recommended using at least five forecasts. These forecasts should be generated using methods that adhere to accepted forecasting procedures for the given situation.
  • “Combining forecasts is most valuable when the individual forecasts are diverse in the methods used and the theories and data upon which they are based” (p.45). Such forecasts are likely to include different biases and random errors, which we expect will tend to cancel each other out.
  • The larger the difference in the underlying theories or methods of component forecasts, the greater the extent and probability of error reduction through combining.
  • Weight the forecasts equally when you combine them. “A large body of analytical and empirical evidence supports the use of equal weights” (p.46). There is no guarantee that equal weights will produce the best results, but they are simple to apply, easy to explain, and a fancier weighting scheme is probably not worth the effort.
  • “While combining is useful under all conditions, it is especially valuable in situations involving high levels of uncertainty” (p.51).
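The payoff from bracketing comes down to simple arithmetic. Here is a small numeric sketch (hypothetical values, not from the paper): when two equally weighted forecasts bracket the actual, the error of their average is strictly smaller than the average of their individual errors; when both miss on the same side, it merely equals that average.

```python
# Hypothetical example: one forecast above the actual, one below ("bracketing")
actual = 100.0
forecast_a = 110.0  # misses high
forecast_b = 95.0   # misses low

combined = (forecast_a + forecast_b) / 2  # equal weights

error_a = abs(forecast_a - actual)              # 10.0
error_b = abs(forecast_b - actual)              # 5.0
avg_individual_error = (error_a + error_b) / 2  # 7.5
combined_error = abs(combined - actual)         # 2.5

print("Average individual error:", avg_individual_error)
print("Error of the combined forecast:", combined_error)
# Because the forecasts bracket the actual, the combined error (2.5) beats
# the average individual error (7.5). Without bracketing, the combined error
# exactly equals the average individual error, so combining never does worse
# on this measure.
```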

So forget about achieving sainthood the hard way. (If burning a caricature of Winston Churchill into a grilled cheese sandwich were easy, I’d be Pope by now.) Instead, deliver a miracle to your organization the easy way: by combining forecasts. [For further discussion of combining forecasts in SAS forecasting software, see the 2012 SAS Global Forum paper “Combined Forecasts: What to Do When One Model Isn’t Good Enough” by my colleagues Ed Blair, Michael Leonard, and Bruce Elsheimer.]
