Supply chain data: Bad data delivered faster is still bad data, and leads to bad decisions

When I talk with supply chain executives about knowing sooner and acting faster, and about the value of concurrent planning, the most common feedback I get is:

“But my data is bad… I mean, really bad.”

It doesn’t matter whether they’re in high tech, consumer packaged goods, aerospace, automotive, or life sciences; the answer is so often the same. And trust me, I’ve seen companies whose data is worthy of a top 10 list of the worst data around, including companies where:

  • A bill of material is only 10% accurate
  • The only inventory records are ‘inventory receipt date’ and ‘inventory ship date’
  • Routings are done in 20+ Excel spreadsheets

At Kinexions ’17, our annual user conference, one of our customers will examine this trend of bad supply chain data in its presentation, Seeing the Light at the End of the Data Tunnel, and showcase how it turned its bad data into good. Instead of turning back to fix the data first, this customer went after the gaps and process breakdowns that had previously been a black hole.

I love asking supply chain leaders, “So if your data is bad today, how would you assess your data 5 years ago, or 10 years ago?” Inevitably, the answer is “it was bad back then too.” As a follow-up, given the relentless growth of complexity, globalization and digital data, I ask, “What do you think your supply chain data will look like in 5 years?” The point is that data has been bad, is bad and will continue to be bad. Clearly, ‘data fix’ projects aren’t the answer. After 25 years leading supply chains, I’ve found two absolute truths:

  1. I can be 100% sure your internal supply chain data has accuracy issues
  2. I can be 100% sure that if you include your suppliers, their ERP systems and data don’t match yours

Root cause

There are many forms of bad data. The most common issue I’ve found is accuracy. Accuracy issues can range from no data, to suspect data, to clearly wrong values. The challenge isn’t so much getting the data accurate as keeping it accurate. Supply chains are defined by volatility, so the ability to adjust quickly is needed not just for a material disruption, but also for a data disruption. Almost every supply chain I’ve seen, dating back to my AMR Research days, is plagued by one simple root cause for bad data: human error at data entry. What’s worse is that supply chain planners take this bad data into Excel, and into multiple Excel files, and make adjustments in a vacuum. Now the problem isn’t just bad data; it’s uncontrolled logic in the supply chain plan. And the problem doesn’t stop with planners in Excel.

  • Planners waste time comparing spreadsheets to reach a decision, dragging out MPS meetings
  • Planners waste time distilling 20+ spreadsheets into PowerPoint decks to drive executive S&OP decisions
  • Every node in the network wastes time re-checking the input data, delaying collaboration

The end result is that supply chain leaders have multiple, delayed versions of the truth, and their planners have no time left to fix the underlying data.

The new model to fix bad data

Fixing the data first, in my opinion, is the wrong model. Why? Because planners just don’t have the time. As an example, suppose a planner spends 2.15 hours per day gathering, normalizing, searching and comparing data. From my experience, I don’t think anyone would argue with that figure; in some cases it’s an optimistic estimate. Extrapolate this to 250 planners and that’s over 67 full-time employees’ worth of effort spent just working data.

The model this customer followed was to first give time back to planners, and then focus on the bad data. This was accomplished by loading the data, bad records and all, into a single solution. Planners then had one solution, with one user interface, and eliminated the multiple Excel sheets. There was no more need to gather data, normalize data, search for exceptions or compare multiple spreadsheets. The 67.2 FTE time savings was the catalyst to fix the data challenge. With one view, it was easy to see which data was bad and to prioritize the key data that needed fixing to execute the plan. Planners simulated “what if the data was X,” and leveraged data scorecards to know when data had been changed.
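For the curious, the 67.2 FTE figure is simple arithmetic. Here’s a minimal sketch of the calculation, assuming a standard 8-hour workday (my assumption; the workday length isn’t stated above):

    # Back-of-the-envelope FTE math, using the illustrative figures from the example
    hours_on_data_per_planner = 2.15   # hours/day gathering, normalizing, comparing data
    planners = 250
    workday_hours = 8                  # assumption: a standard 8-hour workday

    total_hours = hours_on_data_per_planner * planners   # 537.5 planner-hours per day
    fte = total_hours / workday_hours                     # ~67.2 full-time equivalents

    print(f"{total_hours:.1f} planner-hours/day = {fte:.1f} FTEs spent working data")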

The importance of recognizing bad data

We’ve tried the “fix the data first” model, but, as many leaders have seen, the result is still bad data. And with complexity arriving faster in a digital supply chain, data growth is inevitable. A new model for addressing bad data is needed. The first key is to give time back to your planners, and that can only be accomplished by looking at what planners do every day. Invariably, they spend inefficient time in spreadsheets. Use that reclaimed time, and a single platform, not only to get your end-to-end supply chain data accurate, but to keep it accurate. What do you think of this new model for fixing bad data?
