Forecasting: Evaluation Criteria

October 16, 2012

To continue our series on forecasting, let’s discuss one of the varying factors: the evaluation criteria. In classification, accuracy (the percentage of correct predictions) is often used: it is intuitive and easy to interpret. In regression, and in forecasting in particular, the choice is less obvious.

Whatever the application and the prediction method used, at some point performance needs to be evaluated. One motivation for evaluating results is to choose the most appropriate forecasting algorithm. Another is to avoid overfitting. Choosing the right criterion for your problem is therefore a key step. In this post, we will focus on three accuracy measures.

The Root Mean Square Error (RMSE) is certainly the most used measure. This is mainly due to its simplicity and its widespread use in other domains. Its equation is given below:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2}$$

where $y_t$ is the observed value and $\hat{y}_t$ the forecast at time $t$.
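As a minimal sketch, the formula above can be computed directly with NumPy (the series values below are illustrative, not from the post):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: square root of the mean squared forecast error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Illustrative example: observed values vs. forecasts
actual   = [10.0, 12.0, 14.0, 13.0]
forecast = [11.0, 12.0, 13.0, 15.0]
print(rmse(actual, forecast))  # ≈ 1.2247 (sqrt of mean of 1, 0, 1, 4)
```

Note that the result is expressed in the units of the series itself, which is precisely why it cannot be compared across series on different scales.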
The main drawback of RMSE is that it is scale-dependent. It is therefore not possible to compare errors across two different time series. The second measure is the Mean Absolute Percentage Error (MAPE), which is scale-independent:

$$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$$
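A matching sketch for MAPE, again with illustrative values; note that a single zero in the actuals makes the measure undefined, which is the drawback discussed below:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent.
    Undefined when any actual value y_t is zero."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative example: percentage errors of 20%, 10%, 10%
print(mape([10.0, 20.0, 40.0], [12.0, 18.0, 44.0]))  # ≈ 13.33
```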
Its main issue is that it is undefined when the denominator is zero, which may happen often with intermittent data. The third error measure is the Mean Absolute Scaled Error (MASE). The naïve forecast (last value) is used as the denominator:

$$\mathrm{MASE} = \frac{\frac{1}{n}\sum_{t=1}^{n}\left|y_t - \hat{y}_t\right|}{\frac{1}{n-1}\sum_{t=2}^{n}\left|y_t - y_{t-1}\right|}$$
The measure is scale-independent, and a value below 1 means the forecast beats the naïve forecast, which makes the latter a good benchmark.
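The formula can be sketched as below. As a simplifying assumption, the naïve benchmark is computed on the same series being evaluated; in the standard definition the denominator is computed on the in-sample (training) data:

```python
import numpy as np

def mase(y_true, y_pred):
    """Mean Absolute Scaled Error: MAE of the forecast divided by the
    MAE of the one-step naive forecast (previous value).
    Simplified: the naive benchmark is taken on the evaluated series itself."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae_forecast = np.mean(np.abs(y_true - y_pred))
    mae_naive = np.mean(np.abs(np.diff(y_true)))  # mean of |y_t - y_{t-1}|
    return mae_forecast / mae_naive

# Illustrative example: forecast MAE = 1.0, naive MAE = 5/3, so MASE = 0.6
print(mase([10.0, 12.0, 14.0, 13.0], [11.0, 12.0, 13.0, 15.0]))  # 0.6 < 1: beats naive
```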

What error measure do you use and why? Post a comment to share your opinion.