Guiding Principles for Data Enrichment

boblambert12

The data integration process is traditionally thought of in three steps: extract, transform, and load (ETL). Putting aside the often-discussed order of their execution, “extract” means pulling data out of a source system, “transform” means validating the source data and converting it to the desired standard (e.g., yards to meters), and “load” means storing the data at the destination.
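
To make the three steps concrete, here is a minimal sketch in Python; the record layout and the yards-to-meters rule are illustrative assumptions, not any particular ETL tool’s API:

```python
# Minimal ETL sketch: extract rows, standardize units, load to a target.
# Field names and the conversion rule are illustrative assumptions.

def extract(source_rows):
    """Pull raw records out of the source system (here, an iterable)."""
    yield from source_rows

def transform(row):
    """Validate the record and convert it to the target standard."""
    if row.get("distance_yards") is None:
        raise ValueError(f"missing distance in record {row!r}")
    return {
        "customer_id": row["customer_id"],
        "distance_meters": row["distance_yards"] * 0.9144,  # yards -> meters
    }

def load(rows, target):
    """Store the transformed records at the destination (here, a list)."""
    target.extend(rows)

source = [{"customer_id": 1, "distance_yards": 100}]
warehouse = []
load((transform(r) for r in extract(source)), warehouse)
print(warehouse)  # one record, ~91.44 meters
```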

An additional step, data “enrichment”, has recently emerged, offering a significant improvement in the business value of integrated data. Applying it effectively requires a foundation of sound data management practices.

Data integrators traditionally bring data from source to target unchanged. It’s as if ETL developers were movers who prided themselves on putting your furniture in the new place unbroken. Businesses today are asking the movers to repair and improve the furniture before landing it in the new house.

The most obvious enrichment example is address correction. When you enter your address on some US e-commerce sites, the site corrects it by standardizing the street, city, and state fields and adding the last four digits of the ZIP code. ETL vendors tout many possibilities beyond address correction. One lists these types of information that can be added, or “augmented,” to a demographics database, presumably from databases that vendor can provide (a sketch of the idea follows the list):

  • Geographic: such as post code, county name, longitude and latitude, and political district
  • Behavioral: including purchases, credit risk, and preferred communication channels
  • Demographic: such as income, marital status, education, age, and number of children
  • Psychographic: ranging from hobbies and interests to political affiliation
  • Census: household and community data
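
To make augmentation concrete, here is a minimal sketch in Python, assuming a hypothetical vendor table keyed on a shared customer identifier:

```python
# Sketch of demographic augmentation: joining vendor-supplied attributes
# onto customer records by a shared key. All field names are hypothetical.

customers = [
    {"customer_id": 7, "name": "A. Smith", "postal_code": "22901"},
]

vendor_demographics = {  # reference data a vendor might provide
    7: {"marital_status": "married", "children": 2, "income_band": "C"},
}

def augment(record, reference):
    """Merge vendor attributes into the record when the key matches."""
    return {**record, **reference.get(record["customer_id"], {})}

enriched = [augment(c, vendor_demographics) for c in customers]
print(enriched[0]["children"])  # 2
```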

Enrichment isn’t limited to demographics. Data quality tools allow you to define rules that integrate into the ETL stream for any data source; a sketch follows the list:

  • Matching incoming records with existing data, like identifying to which insured member a claim applies
  • Correcting invalid data based on other data in the record, like correcting an out-of-bounds hand-entered measurement based on an independent automated data feed
  • Interpolating missing values based on other available data, like filling in a missing value for gender while loading a pregnancy-related claim
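
A minimal sketch of such rules, using hypothetical claim fields rather than any specific data quality tool’s syntax:

```python
# Sketch of rule-based enrichment in the ETL stream: match, correct,
# and interpolate. Field names and rules are illustrative assumptions.

def match_member(claim):
    """Match the incoming claim to an existing insured member."""
    if not claim.get("member_id"):
        # A real matcher would score candidates and record a confidence
        # level alongside the result; here the lookup is hard-coded.
        claim["member_id"] = "M100"
    return claim

def correct_weight(claim, sensor_feed):
    """Replace an out-of-bounds hand-entered value with an automated feed's."""
    if not 0 < claim.get("weight_kg", -1) < 500:
        claim["weight_kg"] = sensor_feed[claim["member_id"]]
    return claim

def interpolate_gender(claim):
    """Fill in a missing gender on a pregnancy-related claim."""
    if claim.get("gender") is None and claim.get("diagnosis") == "pregnancy":
        claim["gender"] = "F"
    return claim

claim = {"diagnosis": "pregnancy", "weight_kg": 9999}  # no member, bad weight
claim = match_member(claim)
claim = correct_weight(claim, sensor_feed={"M100": 68.0})
claim = interpolate_gender(claim)
print(claim)  # member matched, weight corrected, gender interpolated
```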

As you can imagine, changing source data runs counter to most integrators’ instincts. And yes, it’s risky. Operations that automatically match, correct, or interpolate data values operate at some “confidence” level, meaning that sometimes they are wrong. I worked in one customer service organization whose matching routines processed tens of millions of customer records at 95% confidence; since 5% of 10 million records is 500,000, hundreds of thousands of matches may have been incorrect. That was not necessarily an issue for the particular application involved, but it is something for those implementing enrichment to consider.

Given those risks, I suggest these three guiding principles for organizations adding enrichment to their data integration streams:

  • The business should drive and manage enrichment definition: Data stewards who understand the incoming data and its intended use must be the key drivers of what data is enriched, how it is enriched, and how the enrichment outcomes are tested.
  • Enriched data must be identifiable and auditable in the target database: Any integration target database should feature complete lineage metadata: where each data element came from, when it was loaded, and what happened to it along the way. This is even more important for data added by interpolation, augmentation, matching, or correction. Analysts must know which data came directly from the source, which was generated, and the confidence level of the latter.
  • Data replaced by enrichment must be available alongside the enriched data: Enrichment processes must store modified or added data in such a way that analysts retain access to the “raw” source data. Analysts should be able to independently test enrichment processes and suggest improvements, and if, for whatever reason, enrichment doesn’t meet a specific analysis need, they should be able to fall back to the original source data. A minimal schema sketch follows this list.
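
One way to satisfy the second and third principles is to store, for each enriched element, the raw source value, the enriched value, and its lineage metadata side by side. A minimal sketch, with hypothetical field names rather than any specific warehouse’s schema:

```python
# Sketch of a target-side record that keeps enriched data identifiable,
# auditable, and reversible. Structure and names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class EnrichedField:
    name: str            # which data element this is
    raw_value: object    # exactly what the source system sent
    value: object        # what enrichment produced (== raw if untouched)
    source: str          # where the element came from
    loaded_at: datetime  # when it was loaded
    rule: Optional[str]  # which enrichment rule ran, if any
    confidence: float    # 1.0 for pass-through, <1.0 for generated data

gender = EnrichedField(
    name="gender",
    raw_value=None,      # the source record was missing it
    value="F",           # interpolated from the diagnosis
    source="claims_feed",
    loaded_at=datetime.now(timezone.utc),
    rule="interpolate_gender_from_pregnancy_dx",
    confidence=0.95,
)

# Analysts can filter on rule and confidence, audit the lineage fields,
# or fall back to raw_value when enrichment doesn't fit their analysis.
```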

By following these three guiding principles, organizations can deploy enrichment processes that enhance the business value of integrated data while minimizing risk and preserving flexibility as requirements evolve.
