Data Preparation: Know Your Records!

October 23, 2012

Data preparation in data mining and predictive analytics (dare I also say Data Science?) rightfully focuses on how the fields in one’s data should be represented so that modeling algorithms either will work properly or at least won’t be misled by the data. These data preprocessing steps may involve filling missing values, reining in the effects of outliers, transforming fields so they better comply with algorithm assumptions, binning, and much more.
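
As a rough illustration of that kind of field-level preparation (the column names, thresholds, and toy data here are hypothetical, not taken from any particular project), in pandas it might look something like this:

```python
import pandas as pd
import numpy as np

# Hypothetical customer table with a missing "age" field and a skewed "spend" field.
df = pd.DataFrame({
    "customerID": [1, 2, 3, 4, 5],
    "age":   [34, np.nan, 51, 29, np.nan],
    "spend": [120.0, 85.5, 10_000.0, 42.0, 310.0],
})

# Fill missing values with the median.
df["age"] = df["age"].fillna(df["age"].median())

# Rein in outliers by capping at the 1st/99th percentiles.
lo, hi = df["spend"].quantile([0.01, 0.99])
df["spend_capped"] = df["spend"].clip(lower=lo, upper=hi)

# Transform a skewed field and bin it into quartiles.
df["log_spend"] = np.log1p(df["spend_capped"])
df["spend_bin"] = pd.qcut(df["spend_capped"], q=4, labels=False)
```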

In recent weeks I’ve been reminded how important it is to know your records. I’ve heard this described in many ways, four of which are:

  • the unit of analysis
  • the level of aggregation
  • what a record represents
  • unique description of a record

For example, does each record represent a customer? If so, over their entire history or over a time period of interest? In web analytics, the time period of interest may be a single session, which means an individual customer may appear in the modeling data multiple times, with each visit or session treated as an independent event. This matters especially when disparate data sources are combined.
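
If the raw rows are one per session but the model is meant to score customers, the session data has to be rolled up to the customer level first. A rough sketch of that roll-up, with hypothetical column names and toy data:

```python
import pandas as pd

# Hypothetical session-level data: one row per customerID/session.
sessions = pd.DataFrame({
    "customerID": [101, 101, 102, 103, 103, 103],
    "sessionID":  ["a", "b", "c", "d", "e", "f"],
    "pageviews":  [5, 3, 12, 2, 7, 1],
    "purchase":   [0, 1, 0, 0, 1, 0],
})

# Roll up to one row per customer -- the unit of analysis for a customer-level model.
customers = sessions.groupby("customerID").agg(
    n_sessions=("sessionID", "nunique"),
    total_pageviews=("pageviews", "sum"),
    ever_purchased=("purchase", "max"),
).reset_index()
```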

If one is joining a table of customerID/Session data with another table in which each record represents a customerID, there’s no problem. But if the second table contains customerID/store-visit data, the join will obviously be many-to-many, resulting in a big mess. This is probably obvious to most readers of this blog. What isn’t always obvious is when our assumptions about the data produce unexpected results.
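
One cheap safeguard is to state the expected join cardinality up front and compare row counts before and after the merge. A rough sketch in pandas, with hypothetical tables and column names:

```python
import pandas as pd

# Hypothetical tables: sessions (one row per customerID/session) and
# customers (supposedly one row per customerID).
sessions = pd.DataFrame({"customerID": [1, 1, 2, 3], "sessionID": ["a", "b", "c", "d"]})
customers = pd.DataFrame({"customerID": [1, 2, 3], "segment": ["gold", "silver", "gold"]})

n_before = len(sessions)

# validate="m:1" makes the assumption explicit: many sessions per customer, but at
# most one row per customerID on the right.  If customers turns out to contain
# duplicate customerIDs, pandas raises MergeError instead of silently fanning out.
joined = sessions.merge(customers, on="customerID", how="left", validate="m:1")

# A left join against a truly unique key should not change the row count.
assert len(joined) == n_before
```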

What if we expect the unit of analysis to be customerID/Session but there are duplicates in the data? Or what if we had assumed customerID/Session data but it was actually customerID/Day data (where one’s customers typically have one session per day but could have a dozen)?

The answer: just as we need to perform a data audit to identify potential problems with the fields in the data, we need to perform record audits to uncover unexpected record-level anomalies. We’ve all had those data sources where the DBA swears up and down that there are no dups in the data, but when we group by customerID/Session, we find 1,000 dups. So both before and after the joins, we need to run those group-by operations to find keys with unexpected numbers of matches.
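
Such a record audit can be as simple as grouping by the columns that are supposed to uniquely identify a record and flagging any count greater than one. A rough sketch, again with hypothetical column names and toy data:

```python
import pandas as pd

# Hypothetical data that is supposed to be unique per customerID/session.
df = pd.DataFrame({
    "customerID": [1, 1, 2, 2, 2],
    "sessionID":  ["a", "b", "c", "c", "d"],
    "spend":      [10.0, 5.0, 7.5, 7.5, 3.0],
})

# Count rows per supposed key; anything above 1 is an unexpected duplicate.
key = ["customerID", "sessionID"]
counts = df.groupby(key).size()
dups = counts[counts > 1]

print(f"{len(dups)} duplicated key(s) out of {len(counts)}")
print(dups)
# Run the same check again after every join to catch unexpected fan-out.
```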

In conclusion: know what your records are supposed to represent, and verify, verify, verify. Otherwise, your models (which have no common sense) will exploit these issues in undesirable ways!
