Judgmental Adjustments to the Forecast

So you think you can outsmart your statistical forecast? Apparently, lots of people do.

In “Judgmental Adjustments to Forecasts in the New Economy” (Foresight, Issue 38 (Summer 2015), 31-36), Manzoor Chowdhury and Sonia Manzoor argue that forecasters are becoming more dependent on judgmental adjustments to a statistical forecast.

Sometimes this is because there isn’t sufficient data to generate a trustworthy statistical forecast. For example, there may be no history for a new item, or limited history for items with a short product lifecycle. Or volume may be fragmented across complex and interconnected distribution channels. Or the immediate impact of social media (favorable or unfavorable) cannot be reliably determined.

Of course, the old standby reason for judgmental adjustments is when the statistical forecast does not meet the expectations of management. Executives may have “proprietary information” they can’t share with the forecasters, so it cannot be included in the statistical models. Or they may be (mis-)using the forecast as a target or stretch goal (instead of what the forecast should be: a “best guess” at what is really going to happen).

Do You Have a Good Reason?

Does your boss (or higher-level management) make you adjust the forecast? If so, that is probably a good enough reason to do so. But if they insist you make small adjustments, consider pushing back with the question, “What is the consequence of this adjustment — will it change any decisions?”

Even if directionally correct, a small adjustment that results in no change of actions is a waste of everyone’s time.

A large adjustment, presumably, will result in different decisions, plans, and actions. But will it result in better decisions, plans, and actions? In a study of four supply chain companies, Fildes and Goodwin (“Good and Bad Judgment in Forecasting,” Foresight, Issue 8 (Fall 2007), 5-10) found that any benefits to judgmental adjustments are “largely negated by excessive intervention and over-optimism.” In their sample, negative adjustments (lowering the forecast) tended to improve accuracy more than positive adjustments.

A Simple Test

Here is a simple test of your forecasting ability: determine whether your adjustments are at least directionally correct.

Take a look at your historical forecasting performance data. At the very least, every organization should be recording the statistical forecast (generated by the forecasting software) and the final forecast (after adjustments and management approval), to compare against the actual that occurred. Better still, record the forecast at each sequential step in the forecasting process: statistical forecast, forecaster’s adjustment, consensus adjustment, and final (management-approved) forecast.

What percentage of the adjustments were directionally correct? If more than half, congratulations: you are doing better than flipping a coin!
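If you have that history, the check itself is simple. Below is a minimal sketch in Python, assuming you can pull the statistical forecast, the final (adjusted) forecast, and the actual for each item and period; the record layout and field names are illustrative, not a prescribed format.

```python
# Sample records: statistical forecast, final (adjusted) forecast, and actual.
# Replace with your own historical forecasting performance data.
records = [
    {"statistical": 100, "final": 110, "actual": 101},
    {"statistical": 250, "final": 230, "actual": 215},
    {"statistical": 80,  "final": 95,  "actual": 75},
]

# Only consider records where an adjustment was actually made.
adjusted = [r for r in records if r["final"] != r["statistical"]]

# An adjustment is directionally correct when the final forecast was moved
# toward the actual, i.e. the adjustment and the statistical forecast's error
# point the same way.
correct = sum(
    1 for r in adjusted
    if (r["final"] - r["statistical"]) * (r["actual"] - r["statistical"]) > 0
)

print(f"{correct} of {len(adjusted)} adjustments "
      f"({100 * correct / len(adjusted):.0f}%) were directionally correct")
```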

Warning: Just be aware that you can make a directionally correct adjustment and still make the forecast worse. For example, statistical forecast=100, adjusted forecast=110, actual=101.
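So a useful companion check is whether the adjustment actually reduced the error. The short sketch below works through the example figures above; the variable names are my own, and absolute error is used only because it keeps the comparison obvious.

```python
# Example from the text: the adjustment from 100 to 110 is directionally
# correct (the actual, 101, is above 100), yet it makes the forecast worse,
# increasing the absolute error from 1 to 9.
statistical, final, actual = 100, 110, 101

directionally_correct = (final - statistical) * (actual - statistical) > 0
improved_accuracy = abs(actual - final) < abs(actual - statistical)

print(f"Directionally correct: {directionally_correct}")  # True
print(f"Improved accuracy:     {improved_accuracy}")      # False
print(f"Absolute error before: {abs(actual - statistical)}, "
      f"after: {abs(actual - final)}")
```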

If you don’t keep or have access to historical data on your forecasts and actuals, then test yourself this way: Every morning before the opening bell, predict whether the Dow Jones Industrial Average (or your own favorite stock or stock market index) will end higher or lower than the previous day. It may not be as easy as you think.
