Big Data Quality, a landmine or spring-board for your Digital Supply Chain initiatives?

7 Oct, 2020

Industry 4.0

The global pandemic has become a catalyst for companies to revisit their global supply chain strategy and accelerate adoption of Digital Supply Networks (DSNs). Effective DSNs rely on data that is accessible, high quality, and relevant. Read how data catalogs and data quality drive success for Digital Supply Network initiatives.

Data Accessibility and Data Quality landmines

Everyone is talking about digital transformation (DX), but when it comes to taking practical steps toward making DX a reality in your supply chain, there’s a lot of detail to consider.

Supply chain objectives remain very much the same as they were 20 years ago. Businesses want to increase efficiency, improve customer service levels, and reduce the cost of goods sold. They want to shorten lead times and increase on-time deliveries. They want to optimize their use of working capital by reducing inventory while maintaining high fill rates.

They want to do all of that while remaining flexible enough to respond to rapidly changing market conditions. That means faster planning, real-time monitoring, and agile execution. For decades, those have been the driving priorities for supply chain managers.

Digital transformation is a game changer, provided that it’s done right.

Supply chains have become broader and deeper. Products have gotten more complex. Perhaps most importantly, global supply chains have accelerated significantly, and customer expectations have risen to new levels.

Customers have grown accustomed to a vast array of product choices, tailored to their individual requirements and preferences. And they want it all delivered fast. Amazon Prime has reshaped customer expectations for speed and convenience.

To keep pace, companies have needed to make their supply chains leaner, faster, and more agile. Technology has been the key factor in making that transformation possible. SCM has become more data-driven and more automated. It has gone digital.

According to a recent Gartner study on the role of data analytics in supply chain digitalization, only 15% of companies have no plans to invest in advanced analytics and big data. 

If you are among the remaining 85% of companies, congratulations! You have already started down the road toward supply chain digitization. But how confident are you that it will yield the desired results?

Data is the foundation upon which every digital transformation initiative is built. If data quality is poor, you won’t get the quality insights you are expecting. At best, you’ll be operating well below optimal levels. At worst, you’ll be making decisions based on bad intelligence. 

In the Gartner study, 74% of companies ranked accuracy as one of their top three data quality issues, and 32% selected it as their single most important data quality challenge. Next in line were availability (ranked first by 27% of respondents) and completeness (ranked first by 16% of respondents).

48% of respondents indicated that a lack of cross-functional consensus on common data definitions is their top challenge in data governance and data quality management.

If your digital supply chain strategy doesn't address these kinds of issues proactively, it will inevitably fall short of expectations. Supply chain leaders must focus on getting the foundational elements of data quality right, or they risk having their investments in the digital supply chain underdeliver.

“Digital Supply Chain initiatives will inevitably fall short if big data quality is not addressed proactively. Ongoing DQ efforts save money and reduce project risk.”

Data Catalog and Data Quality tools and processes as a springboard

Successful supply chain digitization relies on four key activities in data management:

Catalog your data. A data-driven supply chain begins with knowing where all of your data is. Cataloging your data provides a complete inventory of databases, applications, data warehouses, data lakes, and other data stores. If you don’t know where your data assets are, you can never have a comprehensive picture of your organization. 

As you begin your journey to supply chain digitization, having an enterprise data catalog just makes sense. Your data integration and migration design will proceed more efficiently because the data discovery process has already been completed.

A data catalog also enables effective self-service analytics. With a catalog of certified datasets, augmented and crowd-sourced by others throughout the organization, business analysts can be significantly more productive, delivering insights to decision-makers faster than they otherwise could.

Cataloging your data provides a foundation for effective data governance. That puts your organization in a better position to comply with data privacy regulations, to analyze IT issues, and to identify opportunities to put your data to work for you in new ways.

A data catalog doesn’t need to be a project unto itself. DvSum’s suite of data quality tools provides automated data cataloging capabilities out-of-the-box, with built-in ML/AI algorithms that can crawl your data stores and reports, autonomously classify data, define relationships, and deliver a data catalog that is 70-80% complete.
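To make the cataloging idea concrete, here is a minimal sketch of an automated crawl, not DvSum's actual implementation. It inventories every table and column in a data store and applies crude name-based classification rules as a stand-in for the ML/AI classification a real catalog tool would perform; the table, column, and rule names are illustrative only.

```python
import re
import sqlite3

# Build a demo "data store" to crawl (a stand-in for real source systems).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, email TEXT, phone TEXT);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, ship_date TEXT);
""")

# Hypothetical classification rules: column-name patterns mapped to data classes.
CLASSIFIERS = {
    r"email": "PII/contact",
    r"phone": "PII/contact",
    r"_date$": "temporal",
    r"_id$": "identifier",
}

def classify(column_name):
    """Assign a data class from naming conventions (a crude proxy for ML-based
    classification)."""
    for pattern, label in CLASSIFIERS.items():
        if re.search(pattern, column_name):
            return label
    return "unclassified"

def crawl(conn):
    """Inventory every table and column, tagging each column with a class."""
    catalog = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    for (table,) in tables:
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        catalog[table] = {col[1]: classify(col[1]) for col in cols}
    return catalog

catalog = crawl(conn)
```

Even this toy crawl yields a queryable inventory (which tables hold PII, where the date fields live), which is the starting point a real catalog tool then enriches with relationships and crowd-sourced context.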

Start the data quality process now, and make it an ongoing practice. Poor data quality undermines the investments that your organization makes in increased efficiency and agility. If you’re embarking on a supply chain digitization effort, give yourself a leg up by driving higher levels of data quality throughout your organization. 

Whether it's your lead-time parameters, price and cost information, or inventory attributes, chances are that bad data is already undermining your supply chain effectiveness. Focusing on data quality as you embark on a new SCM project will produce immediate benefits for your operations while also driving better results for any new supply chain initiatives.

Cleanse & enrich before you migrate. To make your digital supply chain a reality, you will likely need to perform a data migration somewhere along the way. Whether you are moving your existing applications to the cloud or creating a new data lake to support a control tower, your current data will need to be migrated. 

While many see data cleansing as an integral part of the data migration process, there are good reasons to take a more proactive approach. Data migration can be a cumbersome (and often tedious) process. Starting out with clean data not only makes the process go more smoothly, but it also ensures a higher degree of data integrity when the process is complete.

Perhaps your product hierarchy needs more levels in order to create a more granular forecast, taking advantage of advanced computing capabilities. You may want to add more attributes to your product or customer master records to leverage AI/ML capabilities for micro-segmentation. In many cases, these kinds of transformations can best be performed in advance of the migration process. Cleansing and enrichment should be a deliberate and proactive part of any digitization strategy.
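The cleanse-and-enrich step described above can be sketched as a small pre-migration pass over product master records. This is an illustrative example under assumed business rules, not a prescribed pipeline; the field names (`lead_time_days`, `category`, `subcategory`) and the default lead-time value are hypothetical.

```python
# Legacy records as they might come out of a source system: duplicates,
# missing values, inconsistent casing.
legacy_products = [
    {"sku": "A-100", "lead_time_days": "14", "category": "Widgets"},
    {"sku": "A-100", "lead_time_days": "14", "category": "Widgets"},  # duplicate
    {"sku": "B-200", "lead_time_days": None, "category": "widgets "},
]

DEFAULT_LEAD_TIME = 7  # assumed business rule for missing lead times

def cleanse(records):
    """Deduplicate by SKU, normalize types and casing, fill gaps."""
    seen, clean = set(), []
    for rec in records:
        if rec["sku"] in seen:
            continue  # drop keyed duplicates before migration, not after
        seen.add(rec["sku"])
        lead = rec["lead_time_days"]
        rec["lead_time_days"] = int(lead) if lead is not None else DEFAULT_LEAD_TIME
        rec["category"] = rec["category"].strip().title()
        clean.append(rec)
    return clean

def enrich(records):
    """Add a deeper hierarchy level to support more granular forecasting."""
    for rec in records:
        prefix = rec["sku"].split("-")[0]
        rec["subcategory"] = f"{rec['category']}/{prefix}"  # derived level
    return records

migration_ready = enrich(cleanse(legacy_products))
```

Performing these transformations before the migration means the target system is populated with records that already carry the richer hierarchy, rather than retrofitting it afterward.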

Rinse and repeat. To be successful with digitization, organizations must make an ongoing habit of cleansing and validating their data. It's one thing to start out with clean data, but as the volume of data grows, so does the likelihood of new problems. 

Just because your data was complete, consistent, and accurate at go-live, doesn’t mean it will remain that way. Business data quality is an ongoing process. That means organizations must internalize data quality management as a core practice. No serious organization would neglect to apply security patches to their operating systems or to back up data on a regular basis. Likewise, any company that is serious about digital transformation must attend to data quality as an ongoing endeavor.

By empowering business users to define their own business exceptions and automating the auditing and exception management process, IT leaders can ensure that data remains high quality on a continuous basis.

Data Quality Pays for Itself

The MIT Sloan Management Review estimates the cost of bad data quality to be 15% to 25% of revenue for most companies. The authors of the MIT article estimate that “knowledge workers waste up to 50% of their time dealing with mundane data quality issues,” and that “for data scientists, this number may go as high as 80%.”

As companies implement powerful analytics and AI/machine learning capabilities, the “garbage-in garbage-out” caveat takes on an even greater importance.

At DvSum, we know data quality, and we understand the business context for digital transformation initiatives. We produce tangible, measurable results for our customers, and we can demonstrate that. To find out more about how DvSum can support your efforts to optimize your supply chain, contact us for a free demo.

About DvSum

DvSum's cloud platform enables a disruptive approach that not only validates and aligns your data, but actually fixes it. The patented technology scans, checks, and compares multiple data sources simultaneously, without moving or consolidating the data. DvSum's engine leverages machine learning and artificial intelligence to auto-discover and solve issues proactively. Combined with the socially driven rules library, companies can connect and start fixing live data within hours.
