Why Data Sampling Leads to Bad Decisions

New technologies that enable terabyte-scale data analysis are shifting the market away from sampling techniques. That’s a good thing: analyzing complete data sets instead of samples yields more accurate predictive analysis, which leads to better decisions and ultimately to benefits like:

  • Increased campaign response rates
  • Increased website conversion rates
  • Increased audience engagement
  • Increased customer loyalty

Chris Anderson’s article, “The Petabyte Age”, presents a number of compelling examples of how this shift away from sampling is changing the world (though I don’t agree with his related notion that simply having more data makes the scientific method obsolete).

Judah Phillips cites “Sampling, Sampling, Sampling” as one of the “reasons why web analytics data quality can stink”, stating that “… sampling… opens the possibility that you miss key data.”

Anand Rajaraman gave a compelling presentation at Predictive Analytics World last month entitled “It’s the Data, Stupid!”. Building on ideas from his blog post, “More data usually beats better algorithms”, he pointed out that sampling is often less than optimal, noting that “it’s often better to use really simple algorithms to analyze really large datasets, rather than complex algorithms that can only work with smaller datasets.”
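
As a rough illustration of that idea, here is a minimal Python sketch of the “simple algorithm, all the data” approach: a single streaming pass over a campaign log that computes response rates per campaign without ever loading the full file into memory. The file name and column names (campaign_id, responded) are hypothetical; this is a sketch, not a production pipeline.

```python
# A simple one-pass aggregation over the full data set, in the spirit of
# "really simple algorithms on really large datasets". The CSV layout
# (campaign_id, responded) is assumed for illustration only.
from collections import defaultdict
import csv

def campaign_response_rates(path: str) -> dict:
    """Stream a (possibly huge) CSV once, counting responses per campaign."""
    shown = defaultdict(int)      # impressions seen per campaign
    responded = defaultdict(int)  # responses seen per campaign
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            campaign = row["campaign_id"]
            shown[campaign] += 1
            responded[campaign] += int(row["responded"])  # expects "0" or "1"
    return {c: responded[c] / shown[c] for c in shown}

# Hypothetical usage:
# rates = campaign_response_rates("impressions.csv")
# top_ten = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:10]
```

Because the pass is streaming, the approach scales with disk throughput rather than memory, which is exactly why the simple algorithm can afford to see every record instead of a sample.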

The bottom line is that predictive models are more accurate when they are built on the complete data set, because working with the full population eliminates sampling error and the bias that an unrepresentative sample can introduce.
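
To see the sampling error this avoids, here is a small, self-contained Python sketch on synthetic data: the conversion rate computed over the full data set is exact, while estimates from repeated 1% samples scatter around it. The numbers are made up purely for illustration.

```python
# Illustrative only: synthetic visitor log with a known conversion rate.
import random

random.seed(42)

TRUE_RATE = 0.031  # hypothetical "true" conversion rate
visits = [1 if random.random() < TRUE_RATE else 0 for _ in range(1_000_000)]

# Over the complete data set the conversion rate is computed exactly;
# there is no sampling error to worry about.
full_rate = sum(visits) / len(visits)
print(f"full-data conversion rate: {full_rate:.4%}")

# Estimates from repeated 1% samples vary from draw to draw.
for trial in range(5):
    sample = random.sample(visits, k=len(visits) // 100)
    print(f"1% sample #{trial + 1}: {sum(sample) / len(sample):.4%}")
```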

Importantly, sampling is often used to work around the performance limitations of legacy technologies that were simply not designed for terabyte-scale data analysis. Thankfully, that world has changed: technology has evolved, and for an increasingly common set of problems, sampling is no longer required (nor is it the optimal solution). This is exciting because it opens up opportunities to solve challenging problems in ways that were previously not possible.

And clearly, the need to sample data is reduced as query and data load performance increase. In other words, performance matters.
