Big Data – Quantity vs Quality: Don’t confuse lots of data with good data


In the paper today (San Diego Union Tribune, Sunday 1/15) there was a good article on the need for data miners, given the explosion of data that is currently occurring. One point of note was that one of the biggest challenges is making data usable: "If you do have the data you need, it is often very dirty."

With that article, and all the talk surrounding the "BIG DATA" world, I recall one of the fundamental rules of computing: Garbage In, Garbage Out. Anyone who has ever worked with databases, application programming, or analytics can tell you that while having lots of data is good, having good data is much better.

The problem is that some people seem to think that just getting into the big data game will solve their woes, and that is not the case. Many companies do not even have a handle on their "small data" (if I can coin a term here): data whose structures are relatively well known, that is easily transformed and integrated into the corporate whole, and that has defined analytical purposes. Surprising as it is, many companies are still struggling with the move from reports to analytics on this relatively small data. Adding BIG DATA (not just volume, but more so context and structure) into that mix can cause more confusion than clarity for the user community.

To clarify my point, I certainly see the value and potential of incorporating big data into a complete picture of your data. I have long been a proponent of adding interaction data to a company's set of transaction data; one can get a completely different picture of a customer by adding all sorts of data. The point is that companies must understand that this creates a whole new set of data quality metrics and benchmarks for what is clean enough for analytics and what is simply distraction and noise.
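
To make that concrete, here is a minimal sketch of the kind of quality metrics I mean, written in Python with pandas. The column names ("customer_id", "comment_text") and the thresholds are hypothetical, purely for illustration; your own benchmarks would come from your data and your analytical purposes.

```python
# A minimal sketch of basic quality metrics for an incoming dataset,
# assuming a pandas DataFrame with hypothetical columns "customer_id"
# and "comment_text"; thresholds are illustrative, not prescriptive.
import pandas as pd

def profile_quality(df: pd.DataFrame) -> dict:
    """Return simple completeness and noise indicators for a DataFrame."""
    return {
        "rows": len(df),
        # Share of rows missing the join key -- unusable for linkage.
        "missing_customer_id": df["customer_id"].isna().mean(),
        # Exact duplicates often signal feed or scraping problems.
        "duplicate_rows": df.duplicated().mean(),
        # Very short free-text entries are usually noise, not signal.
        "short_comments": (df["comment_text"].fillna("").str.len() < 5).mean(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [101, None, 101, 104],
        "comment_text": ["Great product", "ok", "Great product", None],
    })
    print(profile_quality(sample))
```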

So how does one enter the big data pool without getting mired in the mud? To start, I will outline a few of the steps I would take, and your questions and comments can determine whether more needs to be posted.

First, ensure you have a good user community that is well steeped in the analytic process. Are your users "looking at things" or "looking for things"? If they are simply consuming packaged reports, then adding big data will be more of a challenge. In that situation, I would recommend starting with a small set of real data analysts (or, as they are now called, data scientists) who iteratively comb through sample sets of big data (social media, customer forums, web logs, etc.) to find the missing opportunities or relationships that are currently unknown. Those insights can then be gradually cleansed and introduced to the user community by amending current reports and analytics.
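
As a sketch of what that exploratory pass might look like, here is a small Python script that combs a sample of web logs for frequently requested paths. Anything frequent but absent from your current reports is a candidate "unknown relationship" worth investigating. The log format and the file name ("access_log.sample") are assumptions for illustration.

```python
# A minimal sketch of the exploratory pass described above: a small team
# combs a sample of web logs for patterns worth promoting into reports.
# The common-log-format regex and file name are assumptions.
from collections import Counter
import re

LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+)')

def top_paths(log_file: str, n: int = 20) -> list:
    """Count requested paths in a log sample; frequent but unreported
    paths hint at behavior the current analytics do not cover."""
    counts = Counter()
    with open(log_file) as fh:
        for line in fh:
            match = LOG_LINE.search(line)
            if match:
                counts[match.group("path")] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for path, hits in top_paths("access_log.sample"):
        print(f"{hits:6d}  {path}")
```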

Second, understand how the elements in your "big data" can correlate and join with the data in your enterprise. By establishing linkage between good, solid corporate data and the contents of social media or customer reviews, you increase the value, and the validity, of such data. For example, getting feedback from a customer on products or experiences can be very useful. It is even more useful if you can relate that information to the customer's potential worth or past experiences such as purchases and returns. You may not be able to stop the customer from posting comments, but you can certainly get some context as to why the comments are being made. All of this will improve data quality and allow you to address situations more completely.
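
Here is a minimal sketch of that linkage idea in Python with pandas: raw customer comments are joined to a summary of corporate transaction history, so each comment carries context such as lifetime value and returns. The column names and sample values are hypothetical.

```python
# A minimal sketch of joining uncontrolled feedback to enterprise data.
# Column names ("customer_id", "comment", "amount") are assumptions.
import pandas as pd

comments = pd.DataFrame({
    "customer_id": [101, 102],
    "comment": ["Item arrived late", "Love this product"],
})
transactions = pd.DataFrame({
    "customer_id": [101, 101, 102],
    "amount": [250.0, -50.0, 40.0],   # negative amount = a return
})

# Summarize each customer's transactional footprint...
history = (
    transactions.groupby("customer_id")["amount"]
    .agg(lifetime_value="sum", returns=lambda s: (s < 0).sum())
    .reset_index()
)

# ...then attach it to the unstructured feedback. Comments from
# customers with no match keep NaN context rather than being dropped.
enriched = comments.merge(history, on="customer_id", how="left")
print(enriched)
```

The left join is deliberate: the feedback stays in the picture even when linkage fails, which is itself a data quality signal worth tracking.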

Third, understand that much of the data that is arriving in floods is not under your control. It is created in a multitude of environments, by an untold number of individuals, with little regard for rules or structure. Trying to enforce quality on that data may be more effort than it is worth. By taking the data at face value, and not attaching the importance that comes with "clean and audited" data, you set expectations in the right place, and that is much of the battle. Where possible, you can add filters and code-consistency routines to improve the data quality, and you can use emerging tools to understand the semantics and sentiment of free text, which will improve both the quality and the usefulness of that data.
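
To illustrate the light-touch routines I have in mind, here is a Python sketch that filters obvious junk, normalizes free-form codes, and attaches a crude sentiment signal. The tiny word lexicon is a stand-in for the dedicated text-analytics tools mentioned above; everything here (word lists, thresholds) is an assumption for illustration.

```python
# A minimal sketch of light-touch quality routines for uncontrolled text:
# filter obvious junk, normalize codes, and attach a crude sentiment
# score. The lexicon and thresholds are illustrative assumptions.
import re

POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "broken", "late", "terrible"}

def normalize_code(raw: str) -> str:
    """Enforce one consistent shape for free-form product codes."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def keep(comment: str) -> bool:
    """Filter out entries too short or repetitive to carry meaning."""
    words = comment.split()
    return len(words) >= 3 and len(set(words)) > 1

def sentiment(comment: str) -> int:
    """Crude lexicon score: positive minus negative word hits."""
    words = set(comment.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

if __name__ == "__main__":
    feed = [
        "love it",
        "The delivery was late and the box was broken",
        "Great product at a great price",
    ]
    for c in feed:
        if keep(c):
            print(sentiment(c), "|", c)
```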

Let's start there and see where this goes. I think a lot of words are going to be written this year about big data. Taking these simple steps up front and preparing your company for the new challenges, and benefits, of this emerging arena will pay off for quite some time.
