The Perils of Forecasting Benchmarks

Benchmarks of forecasting performance are available from several sources, including professional organizations and journals, academic research, and private consulting/benchmarking organizations. But there are several reasons why industry forecasting benchmarks should not be used for setting your own forecasting performance objectives.

1) Can you trust the data?

Are the numbers based on rigorous audits of company data or responses to a survey? If they are based on unaudited survey responses, do the respondents actually know the answers or are they just guessing?

2) Is measurement consistent across the respondents?

Are all organizations forecasting at the same level of granularity, such as by product, customer, or region? Are they forecasting in the same time interval, such as weekly or monthly? Are they forecasting with the same lead time offset, such as three weeks or three months in advance? Are they using the same metric? Even metrics as similar-sounding as MAPE, weighted MAPE, and symmetric MAPE can deliver very different values from the same data, as the sketch below illustrates.
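To see how much the metric choice matters, here is a minimal sketch in plain Python. The demand and forecast values are invented purely for illustration, and the symmetric MAPE shown is one common variant (absolute error divided by the average of actual and forecast):

```python
# Three similar-sounding error metrics computed on the same data.
# All numbers below are made up purely for illustration.

actuals   = [100, 50, 10, 200]
forecasts = [110, 40, 25, 190]

def mape(a, f):
    # Mean absolute percentage error: average of |A - F| / A
    return sum(abs(x - y) / x for x, y in zip(a, f)) / len(a)

def wmape(a, f):
    # Weighted MAPE: total absolute error over total actuals,
    # so high-volume items dominate the result
    return sum(abs(x - y) for x, y in zip(a, f)) / sum(a)

def smape(a, f):
    # Symmetric MAPE (one common variant): |A - F| / ((A + F) / 2)
    return sum(abs(x - y) / ((x + y) / 2) for x, y in zip(a, f)) / len(a)

print(f"MAPE:  {mape(actuals, forecasts):.1%}")   # about 46%
print(f"WMAPE: {wmape(actuals, forecasts):.1%}")  # 12.5%
print(f"sMAPE: {smape(actuals, forecasts):.1%}")  # about 31%
```

Same data, same forecasts, yet the reported error ranges from about 12% to about 46%. A benchmark that pools such numbers without specifying the metric tells you very little.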

3) Finally, and most important, is the comparison relevant?

Does the benchmark company have equally forecastable data?

Consider this worst-case example:

Suppose a benchmark study shows that Company X has the lowest forecast error. Consultants and academics then converge on Company X to study its forecasting process and publish reports touting Company X’s best practices. You read these reports and begin to copy Company X’s best practices at your own organization.

However, upon further review using forecast value added (FVA) analysis, you discover that Company X had very easy-to-forecast demand, and it would have had even lower error if it had just used a naive forecast. In other words, Company X’s so-called best practices just made the forecast worse.
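The FVA calculation itself is straightforward: compare your process’s error against the error of a naive forecast that simply repeats the prior period’s actual. Here is a minimal sketch in plain Python, with an invented demand history and MAPE as the (assumed) error metric:

```python
# A minimal FVA (forecast value added) sketch. The demand history and
# "process" forecasts are invented purely for illustration.

actuals = [100, 102, 98, 101, 99, 103]   # stable, easy-to-forecast demand
process = [ 90, 110, 105, 92, 108, 95]   # forecasts from the official process

# Naive forecast: this period's forecast = last period's actual.
# Drop the first period, which has no prior actual to copy.
naive = actuals[:-1]
a, p  = actuals[1:], process[1:]

def mape(act, fcst):
    return sum(abs(x - y) / x for x, y in zip(act, fcst)) / len(act)

naive_err   = mape(a, naive)
process_err = mape(a, p)

# FVA = naive error minus process error; a negative value means the
# process made the forecast worse than doing nothing sophisticated.
fva = naive_err - process_err
print(f"Naive MAPE:   {naive_err:.1%}")    # about 3%
print(f"Process MAPE: {process_err:.1%}")  # about 8%
print(f"FVA:          {fva:+.1%}")         # negative: value subtracted
```

On this contrived data, the official process adds roughly five points of error over the naive forecast, which is exactly the Company X scenario.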

This example is not far-fetched. Organizations at the top of the benchmark lists are probably there because they have the easiest-to-forecast demand. Many organizational practices, even purported best practices, may only make the forecast worse.

Benchmarks tell you the accuracy that best-in-class companies are able to achieve. But they do not tell you whether those companies’ forecasting environments are similar to yours, or whether their processes are even worthy of your admiration. Without that information, industry benchmarks are largely irrelevant and should not be used to evaluate your performance or set performance objectives.

 
