Data Mining / Data Quality / Predictive Analytics

Data Preparation: Know Your Records!

Dean Abbott
4 Min Read

Data preparation in data mining and predictive analytics (dare I also say Data Science?) rightfully focuses on how the fields in one’s data should be represented so that modeling algorithms either will work properly or at least won’t be misled by the data. These data preprocessing steps may involve filling missing values, reining in the effects of outliers, transforming fields so they better comply with algorithm assumptions, binning, and much more.

In recent weeks I’ve been reminded how important it is to know your records. I’ve heard this described in many ways, four of which are:

  • the unit of analysis
  • the level of aggregation
  • what a record represents
  • unique description of a record

For example, does each record represent a customer? If so, over their entire history or over a time period of interest? In web analytics, the time period of interest may be a single session, which means an individual customer may appear in the modeling data multiple times, as if each visit or session were an independent event. This especially matters when disparate data sources are combined.
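
As a quick illustration of how the unit of analysis changes what a record means, here is a minimal pandas sketch (the customerID, sessionID, and pages_viewed columns are hypothetical) counting how many rows each customer contributes when the grain is one row per session:

```python
import pandas as pd

# Hypothetical session-level data: one row per customerID/session,
# so the same customer can appear several times.
sessions = pd.DataFrame({
    "customerID":   [101, 101, 101, 102, 103],
    "sessionID":    ["a1", "a2", "a3", "b1", "c1"],
    "pages_viewed": [5, 2, 9, 3, 7],
})

# How many rows does each customer contribute at this grain?
rows_per_customer = sessions.groupby("customerID").size()
print(rows_per_customer)
# Customer 101 contributes three rows -- three "independent" events
# to a modeling algorithm, even though it is a single customer.
```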

If one is joining a table of customerID/Session data with another table in which each record represents a customerID, there’s no problem. But if the second table actually represents customerID/store visit data, there will obviously be a many-to-many join resulting in a big mess. This is probably obvious to most readers of this blog. What isn’t always obvious is when our assumptions about the data lead to unexpected results.
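
One way to guard against an accidental many-to-many join is to state the expected cardinality when merging. Below is a hedged sketch in pandas (table and column names are made up for illustration): passing validate="m:1" makes the merge fail loudly if the right-hand key is not unique, and comparing row counts before and after the join catches silent blow-ups.

```python
import pandas as pd

# Session-level table: one row per customerID/session.
sessions = pd.DataFrame({
    "customerID": [101, 101, 102],
    "sessionID":  ["a1", "a2", "b1"],
})

# Supposedly one row per customerID -- but here it is really
# customerID/store-visit data, so the key is not unique.
customers = pd.DataFrame({
    "customerID": [101, 101, 102],
    "segment":    ["gold", "gold", "silver"],
})

# Stating the expected cardinality makes the bad grain fail loudly.
try:
    sessions.merge(customers, on="customerID", validate="m:1")
except pd.errors.MergeError as err:
    print("Join grain violated:", err)

# A cruder but useful check: did the row count change unexpectedly?
joined = sessions.merge(customers, on="customerID", how="left")
print(len(sessions), "->", len(joined))  # 3 -> 5: the many-to-many blow-up
```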

What if we expect the unit of analysis to be customerID/Session but there are duplicates in the data? Or what if we had assumed customerID/Session data but it was in actuality customerID/Day data (where customers typically have one session per day, but could have a dozen)?

The answer: just as we need to perform a data audit to identify potential problems with fields in the data, we need to perform record audits to uncover unexpected record-level anomalies. We’ve all had those data sources where the DBA swears up and down that there are no dups in the data, but when we group by customerID/Session, we find 1,000 dups. So before and after joins, we need to do those group-by operations to find keys with unexpected numbers of matches.
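
A record audit of this kind can be as simple as grouping by the columns that are supposed to uniquely identify a record and looking for keys that occur more than once. A minimal sketch in pandas, again with hypothetical column names:

```python
import pandas as pd

# Data that is supposed to be one row per customerID/sessionID.
df = pd.DataFrame({
    "customerID": [101, 101, 101, 102],
    "sessionID":  ["a1", "a1", "a2", "b1"],   # (101, "a1") appears twice
    "revenue":    [10.0, 10.0, 4.5, 7.25],
})

key = ["customerID", "sessionID"]

# Count rows per supposed-unique key; anything above 1 is a dup.
counts = df.groupby(key).size().rename("n_rows").reset_index()
print(counts[counts["n_rows"] > 1])

# Repeat the same audit after every join, since that is where
# unexpected grains usually surface.
```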

In conclusion: know what your records are supposed to represent, and verify, verify, verify. Otherwise, your models (which have no common sense) will exploit these issues in undesirable ways!
