When starting a data quality improvement program, it’s not enough to count the number of records that are incorrect or duplicated in your database. Quantity only goes so far. You also need to know what kinds of errors exist so you can allocate the right resources.
In an interesting blog post, Jim Barker breaks data quality problems down into two types. In this article, we’ll look closely at defining those ‘types’, and at how we can use the distinction to our advantage when developing a budget.
Jim Barker – known as ‘Dr Data’ to some – has borrowed a simple medical concept to define data quality problems. His blog explains just how these two types fit together, and will be of interest to anyone who has struggled to find the data quality gremlins in their machine.
On the one hand, there’s the Type I data quality problem: things we can detect using automated tools. On the other hand, Type II is more enigmatic. You know the data quality problem is there, but it’s more difficult to detect and deal with, because it needs to be contextualised to be detected.
The key difference can be defined simply and quickly: Type I problems can be detected by automated rules, while Type II problems only reveal themselves in context and need human judgement.
The key takeaway is that data quality problems require a complex, strategic approach that is not uniform across a database. Once we break the data down, we start to see that it requires human and automated intervention – a dual attack.
So, how do we deal with Type I and Type II data quality problems? Are the costs comparable, or are they different beasts entirely?
The important thing to remember is that a Type I data validation or verification problem can be logically defined, and that means we can write software to find it and display it. Automated fixes are fast and inexpensive, and can be completed with only occasional manual review. Think of Type I data quality problems as form field validation: once the value is valid, the problem disappears.
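To make this concrete, here is a minimal sketch of rule-based Type I detection. The field names and rules are illustrative assumptions, not anything from Barker's post, but they show the point: each check is a logical rule a machine can apply without context.

```python
import re
from datetime import datetime

def validate(record):
    """Return a list of Type I problems found by simple, logical rules."""
    problems = []
    # Email must match a basic pattern (something@something.tld).
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record["email"]):
        problems.append("email: invalid format")
    # Purchase date must parse as a real calendar date.
    try:
        datetime.strptime(record["purchase_date"], "%Y-%m-%d")
    except ValueError:
        problems.append("purchase_date: not a valid date")
    return problems

# A hypothetical record with two detectable Type I errors.
record = {
    "email": "jane.doe@example",    # missing top-level domain
    "purchase_date": "2023-02-30",  # not a real calendar date
}
print(validate(record))
```

Because every rule is explicit, a tool can sweep an entire database this way and surface the failures for occasional review, which is exactly why Type I fixes stay cheap.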
We could estimate that Type I data represents 80 per cent of our data quality problems, yet consumes only 20 per cent of our budget.
Type II data needs the input of multiple parties so that it can be discovered, flagged and eradicated. While every person in our CRM may have a date of purchase, that purchase date may be incorrect, or may not tally with an invoice or shipping manifest. Only specialists will be able to weed out these problems and manually improve the CRM by carefully verifying its contents.
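The CRM-versus-invoice example above can be sketched in code. The data and field names here are hypothetical: both dates pass Type I validation on their own, and only cross-referencing the two sources exposes the conflict. Note that the script can only flag the disagreement; a specialist still has to decide which value is correct.

```python
# Hypothetical sources: every date below is individually valid.
crm = {
    "C001": {"purchase_date": "2024-03-01"},
    "C002": {"purchase_date": "2024-03-07"},
}
invoices = {
    "C001": {"invoice_date": "2024-03-01"},
    "C002": {"invoice_date": "2024-02-28"},  # conflicts with the CRM
}

def flag_conflicts(crm, invoices):
    """Flag customers whose CRM purchase date disagrees with the invoice.

    This is Type II detection: no single record is malformed, so the
    problem is invisible without context from a second source, and the
    fix still requires a human to judge which source is right.
    """
    flagged = []
    for cust_id, row in crm.items():
        inv = invoices.get(cust_id)
        if inv and inv["invoice_date"] != row["purchase_date"]:
            flagged.append(cust_id)
    return flagged

print(flag_conflicts(crm, invoices))  # C002 needs human review
```

This is why Type II work is labour-intensive: the automation ends at the flag, and everything after it is expert review.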
Often, businesses find it difficult to allocate the necessary resources – particularly if they have grown rapidly, or have high employee churn. While these Type II problems are fewer – perhaps the remaining 20 per cent of the database – they could require 80 per cent of our data quality budget, or more. If you continually lose staff who have that knowledge, and you fail to retain any of it over time, you will find Type II data much more difficult to deal with because the human detection element is lost.
In order to improve data accuracy, we must work on Type I and Type II data as separate but related problems. Fixing Type I data quality challenges can deliver quick wins, but Type II presents a challenge that only human expertise can solve.
Over time, a database will always drift out of date, and keeping it accurate requires ongoing, sustained effort. Data can be cleansed in situ, or validated at the point of entry, but Type I errors will still occur for a number of reasons: import/export glitches, corruption, manual edits and human error. Type II data problems will occur naturally, of their own accord; data that validates and looks correct may now be incorrect, simply because someone’s circumstances have changed.
Data informs business decisions and helps us get a clear picture of the world. Detecting Type I data quality problems is simple, inexpensive and quick. If your business has not yet adopted some kind of data quality software, there’s no doubt that it should be implemented to avoid waste, brand damage and inaccuracy.
As for Type II, the key is to understand that it exists and to implement new processes to prevent it from occurring. Workarounds and employee diversions from business processes will drag the data down. A failure to allocate subject matter experts could increase the amount of Type II data over time. And as that proportion increases, so does the price of fixing it, because you need expert eyes on the data to weed it out. See the 1:10:100 Rule article.
Detecting and eradicating both types of problem is entirely possible; one is simply easier than the other. Data quality vendors are continually looking at new ways to make high quality data simpler to achieve.