More on Forecasting Benchmarks

mvgilliland

The Perils Revisited

A few posts ago I warned of the perils of forecasting benchmarks, and why they should not be used to set your forecasting performance objectives:

  1. Can you trust the data?
  2. Is measurement consistent across the respondents?
  3. Is the comparison relevant?

In addition to a general suspicion about unaudited survey responses, my biggest concern is the relevance of such comparisons. If company A has smooth, stable, and easy-to-forecast demand, and company B has wild, erratic, and difficult-to-forecast demand, then the forecasters at these two companies should be held to different standards of performance. It makes no sense to hold them to some “industry benchmark” that may be trivial for company A to achieve, and impossible for company B.

Perhaps the only reasonable standard is to compare an organization’s forecasting performance against what a naive or other simple model could achieve with the same data. Thus, if a random walk model can forecast with a MAPE of 50%, then I should expect the organization’s forecasting process to do no worse than that.


If the process consistently forecasted worse than a random walk, we know there must be something terribly wrong with it!
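As a minimal sketch of how you might compute that floor yourself (the demand series below is hypothetical, purely for illustration), here is the MAPE of a random walk forecast, where each period's forecast is simply the prior period's actual:

```python
import numpy as np

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    return 100.0 * np.mean(np.abs(a - f) / np.abs(a))

def naive_mape(history):
    """MAPE of a random walk forecast: each period's forecast
    is simply the prior period's actual."""
    a = np.asarray(history, dtype=float)
    return mape(a[1:], a[:-1])

# Hypothetical monthly demand series (illustrative numbers only)
demand = [120, 95, 140, 110, 160, 130]
print(f"Random walk MAPE: {naive_mape(demand):.1f}%")
# A forecasting process applied to this series should beat this number;
# doing consistently worse signals a broken process.
```

Whatever number this produces for your own history is the benchmark that actually matters: it reflects the forecastability of your demand, not someone else's.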

Benchmark Study on Forecasting Blog

One of the forecasting blogs I enjoy is the aptly named Forecasting Blog, published by Mark Chockalingam’s Demand Planning, LLC. Last week it reported on results from a forecasting benchmark survey covering (among other things) the forecast error metric used, and forecast error results.

Unsurprisingly, they found that 67% of respondents used MAPE or weighted MAPE (WMAPE) as their error metric. Less commonly used error metrics were % of Forecasts Within +/- x% of Actuals, Forecast Bias, and Forecast Attainment (Actual/Forecast).

The blog also reported Average of Forecasting Error by Industry (e.g. 39% in CPG, and 36% in Chemicals). However, it was unclear how this average error was computed, and I suspect Peril #2 (Is the measurement consistent across respondents?) may be violated.

It is well known that the same data can give very different results even for metrics as similar-sounding as MAPE and WMAPE. If different companies are using different metrics to compute their forecast error, I’m not sure how you would combine them into an industry average.
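To see just how far apart these two metrics can land on identical data, consider this toy two-item example (the numbers are made up, and WMAPE here uses one common volume-weighted definition):

```python
import numpy as np

def mape(actuals, forecasts):
    """Simple average of item-level absolute percentage errors."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    return 100.0 * np.mean(np.abs(a - f) / np.abs(a))

def wmape(actuals, forecasts):
    """Volume-weighted MAPE (one common definition):
    total absolute error divided by total actual volume."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    return 100.0 * np.sum(np.abs(a - f)) / np.sum(a)

# Hypothetical data: a tiny item badly missed, a big item forecast well
actuals = [10, 1000]
forecasts = [20, 990]

print(f"MAPE:  {mape(actuals, forecasts):.1f}%")   # 50.5% -- the tiny item dominates
print(f"WMAPE: {wmape(actuals, forecasts):.1f}%")  # 2.0%  -- the big item dominates
```

Same data, same forecasts, yet one metric says the forecasting is terrible and the other says it is excellent. Averaging such numbers across companies tells you very little.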

Take a look at the blog post for yourself, and spend a few minutes to take their Forecast Error Benchmark Survey.

TAGGED: error metrics, forecasting benchmarks, Forecasting Blog, forecasting survey, Mark Chockalingam