How Decision Intelligence is Solving Enterprise Data Quality Challenges


“Where there is data smoke, there is business fire.”
– Thomas Redman, author

Every executive knows that data quality is key to success. Intuitively, incomplete data prevents effective decision-making because teams lack full visibility into operations. Poor data accuracy impedes effective analytics and may lead executives to incorrect conclusions. Finally, and perhaps most challenging to solve, enterprise data inconsistencies may lead different teams to arrive at different outcomes as each optimizes for its own domain with what it believes is complete and accurate data.

Businesses have attempted to address these challenges in numerous ways. To improve data completeness, teams have embarked on journeys to build better reporting, create new dashboards, and implement systems and applications that provide everything from control towers to track-and-trace capabilities – all to attain a full, live view of what is happening at all times.

Executives believe that end-to-end visibility will enable the business to troubleshoot problems more quickly while they take the pulse of the operation and steer the ship as needed. However, as the volume of data has exploded in the past decade, the challenge of analyzing the information, making sense of it, and making the right decisions has only grown more complex.

As much as 30% of employees’ time is spent on non-value-added tasks caused by poor data quality.

Furthermore, end-to-end visibility is pointless if a portion of the data is incorrect. To combat this, businesses have established data integrity governance models to monitor and correct information. These measures often have limited success: most businesses have difficulty quantifying the return on investment and end up treating data integrity efforts, and the resources allocated to them, as a cost center to be contained.

The differences are staggering. Employees at the average business are estimated to spend close to 30% of their time on non-value-added tasks resulting from poor data quality and limited data availability, compared with under 10% at leading firms.[1]

Finally, many organizations are siloed, with each functional lead attempting to improve data integrity within his or her own scope. As a result, inconsistencies between systems across functions are often overlooked. These issues are typically caught at higher levels of the organization when they surface in cross-functional reports or in ERPs that store and manage enterprise-wide data, but many still go unnoticed.

There are systemic ways to address these issues, ranging from process controls (for example, having teams run queries to monitor the environment) to enhanced reporting to increased integration between applications – but all of these come with higher costs and greater complexity, whether in resourcing, technical debt, or both.

Decision Intelligence: Designed for Data Integrity

“The core advantage of data is that it tells you something about the world that you didn’t know before.”
– Hilary Mason, data scientist and founder of Fast Forward Labs

While a robust governance model and an adequately staffed cross-functional team are necessary to ensure the highest levels of data integrity, Decision Intelligence platforms have native capabilities that can expedite this journey and help maintain standards with lower costs.

First, by ingesting data across ERPs, systems, and data sources, these platforms create a harmonized data model which simplifies the process of identifying data quality issues. With access to the entire tech stack, teams can configure simple rules to find data gaps, then input either the correct value or the best available proxy given a defined heuristic. This process can be used for essentially any data, from a missing product attribute, to a shipment ID, to a material cost.
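To make the idea concrete, here is a minimal sketch of such a gap-filling rule, assuming harmonized records are available as a pandas DataFrame. The column names and the median-of-product-family proxy heuristic are illustrative assumptions, not the platform’s actual configuration interface.

```python
import pandas as pd

# Illustrative harmonized records; column names are hypothetical.
records = pd.DataFrame({
    "shipment_id": ["S-1001", None, "S-1003"],
    "material_cost": [12.5, None, 14.2],
    "product_family": ["A", "A", "B"],
})

# Rule 1: flag rows with a missing shipment ID for follow-up.
missing_ids = records[records["shipment_id"].isna()]

# Rule 2: fill a missing material cost with a proxy -- here, the median
# cost within the same product family -- until the true value arrives.
records["material_cost"] = records.groupby("product_family")["material_cost"] \
    .transform(lambda costs: costs.fillna(costs.median()))

print(missing_ids)
print(records)
```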

Similarly, rules can be configured for the Decision Intelligence platform to identify out-of-bounds values and correct them. Because the platform can ingest and harmonize both internal and external data sets, the check may be made against a reference value in another application or an external data set. Alternatively, the reference may be a natively computed value derived from a specified formula, called from a particular function or model, or obtained by running an optimizer – among other means. Examples include lead times, safety stocks, production capacity, and margin.

In these scenarios, a threshold can be configured such that if the value found deviates by more than a given percentage from the expected outcome, the platform will surface a recommendation to update – or even execute the edit autonomously, if desired.
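A hedged sketch of such a threshold check appears below, assuming the expected value has already been retrieved from a reference system, formula, or optimizer. The function and field names are hypothetical and do not represent the platform’s actual rule syntax.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Correction:
    field: str
    current: float
    expected: float
    action: str  # "recommend" or "auto-apply"

def check_value(field: str, current: float, expected: float,
                tolerance_pct: float, autonomous: bool) -> Optional[Correction]:
    """Compare a stored value against a reference (or computed) value and
    decide whether to surface a recommendation or apply the fix directly."""
    deviation = abs(current - expected) / expected
    if deviation <= tolerance_pct:
        return None  # within bounds, nothing to do
    action = "auto-apply" if autonomous else "recommend"
    return Correction(field, current, expected, action)

# Example: a stored safety stock of 40 units vs. an optimizer-computed 55,
# with a 10% tolerance and autonomous execution disabled.
result = check_value("safety_stock", 40, 55, tolerance_pct=0.10, autonomous=False)
print(result)  # deviation ~27% exceeds 10%, so the platform recommends an update
```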

Perhaps the most powerful capability of a Decision Intelligence platform is its ability to orchestrate information across systems, maintaining consistency throughout the ecosystem. Teams can configure a broad range of business rules, which can be as simple as making all systems and applications match the main ERP records (e.g., making every application align with SAP) – or as complex as configuring an entire decision tree to define which value is the best fit, then writing that definitive value back to all underlying systems (e.g., recalibrating prices across all systems based on demand-sensing inputs).
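The sketch below illustrates the simpler end of that spectrum – aligning downstream applications with a master ERP record – under the assumption that each connected system exposes a basic read/update interface. The connector class and system names are hypothetical, not the platform’s actual integration API.

```python
# Hypothetical connected systems holding their own copy of a record.
class SystemConnector:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # e.g., {"PROD-42": {"price": 19.99}}

    def update(self, key, field, value):
        self.records[key][field] = value

def align_to_master(master, others, key, field):
    """Simplest orchestration rule: make every downstream system
    match the master ERP value for the given record and field."""
    canonical = master.records[key][field]
    for system in others:
        if system.records[key][field] != canonical:
            system.update(key, field, canonical)

erp = SystemConnector("SAP", {"PROD-42": {"price": 19.99}})
pricing = SystemConnector("PricingApp", {"PROD-42": {"price": 18.50}})
planning = SystemConnector("PlanningApp", {"PROD-42": {"price": 19.99}})

align_to_master(erp, [pricing, planning], "PROD-42", "price")
print(pricing.records)  # {'PROD-42': {'price': 19.99}}
```

A more complex rule would replace the “match the master” step with a decision tree that selects the best-fit value before writing it back to every system.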

Leveraging Machine Scale and Speed

The goal of these data management mechanisms is to improve data quality progressively, to the point where AI can manage data checks and fixes automatically.

At the outset, the data integrity team would receive prescriptive recommendations on what to fix in order to make the data more accurate, complete, or consistent. This first stage of the journey is about fine-tuning the logic and letting the AI train on both the data and user behavior as the team continues to accept or reject recommendations.

As this process continues, the next phase may allow the AI to maintain data integrity for non-critical information, leaving attributes such as pricing or customer information to be reviewed or maintained by team members. Finally, the team can define business rules so that the platform handles some, or all, data integrity corrections. In this last stage, the user’s role is to handle exceptions and ensure that the decision logic is up-to-date as business conditions change.
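One way to picture this phased handover is a simple routing policy that gates autonomous execution by attribute criticality. The phase names and attribute lists below are illustrative assumptions rather than a prescribed configuration.

```python
# Attributes treated as too critical for unattended updates (illustrative).
CRITICAL_ATTRIBUTES = {"price", "customer_name", "payment_terms"}

def route_correction(attribute: str, phase: str) -> str:
    """Decide whether a proposed fix is applied automatically or sent
    to the data integrity team for review, depending on the rollout phase."""
    if phase == "recommend-only":
        return "review"                      # phase 1: humans approve everything
    if phase == "non-critical-auto":
        return "review" if attribute in CRITICAL_ATTRIBUTES else "auto"
    if phase == "full-auto":
        return "auto"                        # phase 3: humans handle exceptions only
    raise ValueError(f"unknown phase: {phase}")

print(route_correction("lead_time", "non-critical-auto"))  # auto
print(route_correction("price", "non-critical-auto"))      # review
```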

With Decision Intelligence managing data integrity at machine scale and speed, companies will devote less time and fewer resources to reporting and integrations, significantly reducing costs while achieving higher levels of data integrity.

Executives know that effective technology and operations rest on complete, accurate, and consistent data. One way to get started is to anchor the implementation of Decision Intelligence, and its application to Data Management, in broader digital transformation themes.

As Decision Intelligence platforms are used to support, improve, and ultimately automate data integrity efforts, companies will begin to see productivity gains among their data integrity teams, quickly allowing them to elevate both roles and governance. Ultimately, the resulting efficiency gains and lower costs will contribute to the value realized over time from Decision Intelligence.

Watch our on-demand webinar, “Solving Data Integrity with Decision Intelligence,” to learn more about how Aera Decision Cloud can help your company improve data quality and, with it, the quality of business decisions.
