Is Big Data Failing?
“Big Data” just became more confusing. Just a few weeks ago, the term was the darling of every investor. Every technology firm claimed it was in the Big Data space. Customers rushed to get themselves “some Big Data.”
But then, at Gartner’s Business Intelligence Summit in Barcelona, things changed. It all started in January, when Svetlana Sicular, a Gartner analyst, exposed what many had feared: Big Data has officially landed in Gartner’s “Trough of Disillusionment.” What does that mean? Is Big Data failing? If you’re frantically scanning this post to find out whether the world of data as we know it has ended, you can slow down: this story actually ends well.
Many will likely use Gartner’s analysis to focus on the bad side of this new development. But let’s try to be positive and think about the future. If you remind your team, customers and partners of the two factors below, you should be able to weather this situation and speed your way into Gartner’s next phase: the “Slope of Enlightenment.”
Analytics Is Big Data’s Killer App
Svetlana Sicular’s conclusions seem to be based on her interactions with customers and vendors who are invested in Hadoop’s infrastructure. At my company, SiSense, we have spent countless hours with such customers. No matter the industry, they all express the same disillusionment: “We’ve spent months laying out our Big Data infrastructure, yet we feel we are months away from gaining insights from our data.”
The Big Data ecosystem has seen an asymmetry of attention between Big Data infrastructure and Big Data analytics. The attention paid to infrastructure over applications shows up in Gartner’s own forecasts: the research firm estimated that $30 billion was spent on Big Data infrastructure in 2012, whereas it said that the data exploration market accounted for about $7 billion.
But Analytics is where you and your team have the most leverage on Big Data. Analytics is what business users work with. Analytics is the last mile that turns average competitors into data-driven winners. If Hadoop is an elephant, Analytics is its rider. So, if you are working on a “Big Data” project today, prepare your “Analytics Plan” and make sure it is part of your infrastructure solution early on – you’ll avoid some major disillusionment.
Big Data Gone Wild
The current debate around Big Data reminds me of the old parable of the blind men and an elephant. The tale describes how a group of blind men try to describe what an elephant is like after touching it. It illustrates the concept of pluralism – the idea that truth is perceived differently from diverse points of view.
Arguing about the state of Big Data when we don’t know what we are looking at is a similar mistake, in my humble opinion. In its post, Gartner seems to use the terms “Hadoop,” “Big Data,” and “Petabyte-scale” interchangeably. Many have warned against such conflation, though. I even went as far as suggesting that a company knows it has “Big Data problems” when its infrastructure (human or technical) can’t keep up with its data needs – regardless of the size, velocity or variety of its data.
If you look at Big Data beyond Hadoop, you’ll find that many more companies are doing “Big Data” without necessarily calling it that. For instance, read EMA’s latest report on Big Data size. You’ll find that the majority of companies report their most common data sizes start at 110GB, with most falling between 10 and 30TB. For them, Big Data is about terabytes, not petabytes.
There are many more aspects of this debate we could discuss – many of which I summarized in this presentation. I’d love to hear your take!
Bruno Aziza is the author of “Drive Business Performance” and Chief Marketing Officer at Alpine Data Labs. Prior to Alpine Data Labs, Bruno worked at BusinessObjects, Apple and Microsoft. Bruno has guest lectured at Stanford University in the US and the Cranfield School of Management in the UK. He was educated in France, Germany, the UK and the US. You can connect with him directly ...