Defining Your Data Quality Problems

martindoyle

To tackle any problem in a systematic and effective way, you must be able to break it down into parts. After all, understanding the problem is the first step to finding the solution. From there, you can develop a strategic battle plan. With data quality, the same applies: every initiative features many stages and many different angles of attack.

When starting a data quality improvement program, it’s not enough to count the number of records that are incorrect or duplicated in your database. Quantity only goes so far. You also need to know what kinds of errors exist in order to allocate the correct resource.

Contents
  • Types of Data
  • Cost to Fix
  • Improving Accuracy
  • Ensuring Data Integrity

In an interesting blog post, Jim Barker breaks data quality problems down into two types. In this article, we’ll look closely at defining these ‘types’, and at how we can use them to our advantage when developing a budget.

Types of Data

Jim Barker – known as ‘Dr Data’ to some – has borrowed a simple medical concept to define data quality problems. His blog explains just how these two types fit together, and will be of interest to anyone who has struggled to find the data quality gremlins in their machine.

On the one hand, there’s the Type I data quality problem: things we can detect using automated tools. On the other hand, Type II is more enigmatic. You know the data quality problem is there, but it’s more difficult to detect and deal with, because it needs to be contextualised to be detected.


The key differences can be simply and quickly defined:

  • Type I data quality problems require “know what” to identify: completeness, consistency, uniqueness and validity. These attributes can be picked up using data quality software, or even manually. You don’t need a lot of background knowledge, or a track record of working with that data. It’s there, it’s wrong and you can track it down. For example, if we insert a 3 into a gender field, we can be sure that it is not a valid entry (see the sketch after this list).

  • Type II data quality problems require “know how” to detect: timeliness, congruence and accuracy. They require research, insight and experience, and are not as simple or straightforward to detect. These datasets may appear free of problems, at least on the surface. The devil is in the detail, and these problems take time to correct. Jim’s example is an employee record for someone who has retired. Without knowing the date of retirement, the record would appear to be correct.
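
To make the distinction concrete, here is a minimal sketch (in Python) of how Type I checks might be automated. The record layout, field names and the set of accepted gender codes are assumptions made purely for illustration, not anything from Jim’s post; the point is simply that each rule can be evaluated with no business context at all.

```python
# A minimal sketch of Type I ("know what") checks: each rule can be evaluated
# without any business context. The record layout, field names and accepted
# gender codes are illustrative assumptions.

records = [
    {"id": 1, "gender": "F", "email": "ann@example.com"},
    {"id": 2, "gender": "3", "email": "bob@example.com"},   # validity problem
    {"id": 3, "gender": "M", "email": ""},                  # completeness problem
    {"id": 3, "gender": "M", "email": "carl@example.com"},  # uniqueness problem
]

VALID_GENDERS = {"M", "F", "X"}

def type_one_issues(rows):
    """Flag validity, completeness and uniqueness problems with simple rules."""
    issues = []
    seen_ids = set()
    for row in rows:
        if row["gender"] not in VALID_GENDERS:
            issues.append((row["id"], "validity: unexpected gender code"))
        if not row["email"]:
            issues.append((row["id"], "completeness: missing email"))
        if row["id"] in seen_ids:
            issues.append((row["id"], "uniqueness: duplicate record id"))
        seen_ids.add(row["id"])
    return issues

for record_id, problem in type_one_issues(records):
    print(record_id, problem)
```

Anything a sketch like this catches can be found and displayed automatically, which is exactly what makes Type I the cheaper half of the problem.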

The key takeaway is that data quality problems require a complex, strategic approach that is not uniform across a database. Once we break the data down, we start to see that it requires human and automated intervention – a dual attack.

Cost to Fix

So, how do we deal with Type I and Type II data quality problems? Are the costs comparable, or are they different beasts entirely?

The important thing to remember is that a Type I data validation or verification problem can be logically defined, and that means we can write software to find it and display it. Automated fixes are fast, inexpensive and can be completed with only occasional manual review. Think of Type I data quality problems as form field validation. Once valid, the problem disappears.

We could estimate that Type I data represents 80 per cent of our data quality problems, yet consumes 20 per cent of our budget.

Type II data needs the input of multiple parties so that it can be discovered, flagged up and eradicated. While every person in our CRM may have a date of purchase, that purchase date may be incorrect, or may not tally with an invoice or shipping manifest. Only specialists will be able to weed out these problems and manually improve the CRM by carefully verifying its contents.
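
To illustrate why Type II work needs context, the rough sketch below cross-references a hypothetical CRM extract against an equally hypothetical invoice export. Every record passes format-level rules like the ones above, yet a second source is enough to show that something needs a specialist’s review. All names, fields and dates are invented for illustration.

```python
# A rough sketch of a Type II check: the CRM rows look valid on their own, but
# cross-referencing a second (hypothetical) source reveals dates that don't
# tally and should be routed to a specialist for manual verification.

from datetime import date

crm = {
    101: {"customer": "Acme Ltd", "purchase_date": date(2023, 4, 2)},
    102: {"customer": "Beta plc", "purchase_date": date(2023, 5, 17)},
}

invoices = {
    101: date(2023, 4, 2),
    102: date(2023, 5, 19),   # disagrees with the CRM by two days
}

def needs_review(crm_rows, invoice_dates):
    """Yield CRM records whose purchase date does not match the invoice."""
    for order_id, row in crm_rows.items():
        invoiced = invoice_dates.get(order_id)
        if invoiced is not None and invoiced != row["purchase_date"]:
            yield order_id, row["purchase_date"], invoiced

for order_id, crm_date, invoice_date in needs_review(crm, invoices):
    print(f"Order {order_id}: CRM says {crm_date}, invoice says {invoice_date}")
```

Note that the software only narrows the search: a person with knowledge of the business still has to decide which source is right.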

Often, businesses find it difficult to allocate the necessary resource – particularly if they have grown rapidly, or have high employee churn. While these Type II problems are fewer – perhaps the remaining 20 per cent of the database – they could require 80 per cent of our data quality budget, or more. If you continually lose staff who have that knowledge, and you fail to retain any of it over time, you will find Type II data much more difficult to deal with because the human detection element is lost.

Improving Accuracy

In order to improve data accuracy, we must work on Type I and Type II data as separate, but conjoined, problems. Fixing Type I data quality problems can deliver quick wins, but Type II presents a challenge that calls for human expertise.

Over time, a database will always drift out of date, and keeping it current requires ongoing, sustained effort. Data can be cleansed in situ, or validated at the point of entry, but Type I errors will still occur for a number of reasons: import/export, corruption, manual edits, human error. Type II data problems will occur naturally, of their own accord: data that validates and looks correct may now be incorrect, simply because someone’s circumstances have changed.
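
As a small illustration, validation at the point of entry might look something like the sketch below, which rejects a record before it reaches the database rather than cleansing it afterwards. The fields and rules are illustrative assumptions that mirror the Type I checks earlier.

```python
# A small sketch of validation at the point of entry: reject a record before it
# is stored rather than cleansing it later. Field names and rules are
# illustrative assumptions.

import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_on_entry(record):
    """Return a list of reasons the record should be rejected at entry."""
    errors = []
    if not record.get("name"):
        errors.append("name is required")
    if not EMAIL_PATTERN.match(record.get("email", "")):
        errors.append("email is not in a valid format")
    return errors

new_record = {"name": "Dana", "email": "dana[at]example.com"}
problems = validate_on_entry(new_record)
if problems:
    print("Rejected at entry:", ", ".join(problems))
else:
    print("Accepted")
```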

Ensuring Data Integrity

Data informs business decisions and helps us get a clear picture of the world. Detecting Type I data quality problems is simple, inexpensive and quick. If your business has not yet adopted some kind of data quality software, it should do so to avoid waste, brand damage and inaccuracy.

As for Type II, the key is to understand that it exists and to implement new processes to prevent it from occurring. Workarounds and employee deviations from business processes will drag the data down. A failure to allocate subject matter experts could increase the amount of Type II data over time. And as the proportion increases, so does the price of fixing it, because you need expert eyes on the data to weed it out. See the 1:10:100 Rule article.

Detecting and eradicating both types of problem is entirely possible; one is simply easier than the other. Data quality vendors are continually looking at new ways to make high-quality data simpler to achieve.
