Forecast Accuracy: Keep Your Demand Management Process Honest


Our partner Celestica recently published the article Are you keeping your demand management process honest? The author, Eric C. Lange, Director of Demand Planning and S&OP Services at Celestica, examines forecast accuracy and the main components of a demand management measurement tool and process. We’ve outlined his recommendations below to help you improve your forecast accuracy, leading to better business operations and ultimately greater success.

Reporting Forecast Accuracy

Even with an established Sales and Operations Planning (S&OP) process, if you’re neglecting forecast accuracy measurement and reporting, you’re missing a critical piece of the puzzle for demand management success. Yes, it’s often a difficult, time-consuming and complex endeavor, but skipping it limits the entire process’s prospects for success. Calculating forecast accuracy is important, but it isn’t enough on its own: you also need measurement and accuracy reporting to determine how effective the whole demand management process is. There are three main components of a demand management measurement tool and process:

  • Decide on the method used to calculate forecast accuracy
  • Determine how to calculate and eliminate any forecast bias in the process
  • Manage all the data needed to evaluate the effectiveness of the demand management process

Once these components are in place, it’s time to move on to determining added value in the forecast.

Forecast Accuracy

Forecast accuracy should be used to determine effectiveness, not to punish demand planners. The consensus forecast should draw on input and participation from sales, marketing, finance and senior management, as well as statistical models. Several inputs and assumptions lead to the agreed-upon and executed forecast, so the entire process should be measured to determine which assumptions are more accurate, tracing the process that led to the final demand plan back to the source of the errors. Building this kind of analysis manually is time-consuming, so use a dynamic, user-defined drill-down tool. Research shows a 3% increase in forecast accuracy yields a 2% increase in profit margins [AMR Research, 2008].
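
To make this concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers) of two common ways the accuracy calculation itself might be defined. Volume-weighted MAPE (WMAPE) is shown alongside plain MAPE because low-volume items can make the unweighted metric misleading; your own tool may well use a different formula.

    # Illustrative only: two common forecast accuracy metrics on made-up data.
    def mape(actuals, forecasts):
        """Mean Absolute Percentage Error: simple, but unstable when actuals are near zero."""
        pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
        return sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

    def wmape(actuals, forecasts):
        """Weighted MAPE: weights each item's error by its actual volume."""
        total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
        return total_error / sum(abs(a) for a in actuals)

    actuals = [120, 80, 45, 200]     # hypothetical units sold per item
    forecasts = [100, 95, 40, 210]   # hypothetical consensus forecast

    print(f"MAPE:  {mape(actuals, forecasts):.1%}")
    print(f"WMAPE: {wmape(actuals, forecasts):.1%}")
    print(f"Accuracy (1 - WMAPE): {1 - wmape(actuals, forecasts):.1%}")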

Bias

Forecasting may prove to be the single most difficult issue your company faces. Millions have been spent over the last few decades trying to improve forecast accuracy, yet inaccuracies and bias remain. Forecast bias occurs when the forecast is consistently higher or lower than actual demand, and it is usually introduced into the demand planning process by people. Analysts or sales reps may forecast low if they’re rewarded for overachieving the sales forecast; if management expects unrealistic numbers above the sales plan, forecasts will typically run higher than actuals. It takes time to identify a bias, but trends eventually emerge and must be managed out of the demand planning process.

There are several ways to calculate, report, manage and even eliminate bias, but doing so can be very time consuming, so an analytical tool that captures and reports bias quickly is another critical component. If you determine that a forecast is biased, you can try to manage the bias out by working more closely with the analysts or sales reps, or use an automated reporting tool to measure forecast accuracy and bias and to help shape future forecasts.
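
As a rough illustration of how a tool might flag bias automatically, the sketch below (hypothetical data) computes the signed mean error and a tracking signal over six months of history; the ±4 threshold is a common rule of thumb, not a fixed standard.

    # Illustrative only: flagging persistent over- or under-forecasting.
    def bias_report(actuals, forecasts, threshold=4.0):
        errors = [f - a for a, f in zip(actuals, forecasts)]
        mean_error = sum(errors) / len(errors)               # signed bias
        mad = sum(abs(e) for e in errors) / len(errors)      # mean absolute deviation
        tracking_signal = sum(errors) / mad if mad else 0.0
        return {
            "mean_error": mean_error,
            "tracking_signal": tracking_signal,
            "biased": abs(tracking_signal) > threshold,
            "direction": "over-forecasting" if mean_error > 0 else "under-forecasting",
        }

    # Six months of hypothetical history where the forecast runs persistently high.
    actuals = [100, 110, 95, 105, 98, 102]
    forecasts = [120, 125, 110, 118, 112, 115]
    print(bias_report(actuals, forecasts))  # tracking signal of 6.0 -> worth a review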

Data Management

To calculate forecast accuracy and bias, historical data needs to be captured and archived, and an automated process for doing this could prove critical to the success of the demand planning process. You’ll also need a tool that can be customized to hold multiple forecast versions, offer multi-option filtering and support what-if scenarios. That flexibility makes it possible to build customized reports that give the sales force and the demand planning team a consistent way to measure and monitor improvement, and standardized reports should be published regularly and auto-distributed to everyone with a stake in improving their forecasts. Some examples of custom categories include the following; a short sketch of slicing an archive by these categories appears after the list:

  • Customer/Location
  • Product
  • Forecast Comparison
  • Bias Range
  • Measure
  • Date Range
  • Lag Periods
  • Time Fence to Review
  • Error Range or Buckets
  • Segmentation
  • Product Life Cycle
  • Monthly Actual Carry-over Allowance
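
For illustration only, here is one hypothetical way such an archive might be structured and then sliced by a couple of the categories above (customer and lag period) using pandas; the schema and field names are assumptions, not a prescribed data model.

    # Illustrative only: each row is one archived forecast snapshot plus the actual.
    import pandas as pd

    archive = pd.DataFrame([
        {"customer": "ACME", "product": "SKU-1", "period": "2023-01", "lag": 1, "forecast": 120, "actual": 100},
        {"customer": "ACME", "product": "SKU-1", "period": "2023-01", "lag": 3, "forecast": 140, "actual": 100},
        {"customer": "Beta", "product": "SKU-2", "period": "2023-01", "lag": 1, "forecast": 60,  "actual": 70},
    ])

    # Filter to lag-1 forecasts, then compute WMAPE per customer -- one drill-down view among many.
    lag1 = archive[archive["lag"] == 1].copy()
    lag1["abs_error"] = (lag1["forecast"] - lag1["actual"]).abs()
    by_customer = lag1.groupby("customer")[["abs_error", "actual"]].sum()
    by_customer["wmape"] = by_customer["abs_error"] / by_customer["actual"]
    print(by_customer["wmape"])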

Value-Added Forecasting

One way to ensure your demand planning process is adding value is to measure a naïve forecast (for example, a simple rolling average) and compare it with the statistical model’s results and the analyst’s manual override to see whether the latter two add value. A tool that shows these comparisons quickly makes it easy to determine where the forecast information should be coming from; sometimes the more complex statistical models, or the human element, add no value and need to be limited or removed.
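
This comparison is often called forecast value added (FVA). A small hypothetical sketch (invented numbers) of scoring each forecast stream against the naïve baseline might look like this:

    # Illustrative only: does each forecasting step beat the naive baseline?
    def wmape(actuals, forecasts):
        return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)

    actuals     = [100, 110, 95, 105, 98, 102]
    naive       = [102, 100, 110, 95, 105, 98]    # e.g. prior period's actual or a rolling average
    statistical = [101, 106, 99, 103, 100, 101]   # hypothetical statistical model output
    override    = [115, 120, 105, 112, 108, 110]  # hypothetical analyst override

    baseline = wmape(actuals, naive)
    for name, stream in [("statistical model", statistical), ("manual override", override)]:
        err = wmape(actuals, stream)
        print(f"{name}: WMAPE {err:.1%}, value added vs naive {baseline - err:+.1%}")
    # A negative "value added" means that step is making the forecast worse, not better.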

What Gets Measured Gets Done

Forecast accuracy has to be measured where it’s meaningful. Did an error at the item, category, or brand level actually impact your business? Does the location of the error matter in the big picture? Does it matter whether you’re accurate by week or by month? Without exception, the key performance indicators should be sales forecast accuracy AND forecast bias, and the ability to deep-dive into the error and quickly uncover bias is what drives real improvement in forecast accuracy. Once you reduce forecast error and remove bias, your business can embrace the demand planning process and use it to drive business operations and success. You can view the whitepaper in its entirety on the Supply Chain Expert Community.
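
To make the measurement-level point concrete, here is a small hypothetical example: two items that are each 30% off look perfectly forecast once they’re rolled up to the category level, which is exactly why the level you measure at has to match where the error hurts the business.

    # Illustrative only: item-level errors can cancel out at the category level.
    items = {
        "SKU-A": (100, 130),   # (actual, forecast): over-forecast by 30
        "SKU-B": (100, 70),    # (actual, forecast): under-forecast by 30
    }

    for sku, (actual, forecast) in items.items():
        print(f"{sku}: error {abs(forecast - actual) / actual:.0%}")   # 30% each

    cat_actual = sum(a for a, _ in items.values())
    cat_forecast = sum(f for _, f in items.values())
    print(f"Category: error {abs(cat_forecast - cat_actual) / cat_actual:.0%}")  # 0%, yet inventory is wrong on both items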
